Best practices for setting up and organizing a single Programmer Instrument so it can be processed by multiple effects chains?

I’ve got a question I haven’t seen answered yet, though I imagine it’s a fairly common scenario and the answer would be useful to others.

How would we set something like this up? (And any Unity specifics to be aware of.)

  • A sound is played from within the game through a user-selected “audio filter” — like a dropdown menu with different choices, which results in a UI slider being shown that can go from 0-100.
  • It gets handed over to FMOD as a Programmer Instrument in a single event.
  • The main purpose of the event is to treat the sound with an effects chain, plus some additional ambient textures. For example, if I want to make something sound like an old TV, it’d pass through a convolution reverb (IR) and several other effects.

Overall, the intent is to take a UGC sound and then colorize/stylize it with effects.

So structurally, that looks like each dropdown menu choice = a different FMOD event. One at a time, I can get my head around that; I’ve done similar things before, where I had different events set up for footstep sounds corresponding to visual materials, and other context-changing stuff. It seems like that’d be reasonable to organize and easy to keep adding new options to in the future. And on a usability level, it could be less cluttered than setting up some kind of fancy mixer routing (unless there are options available to us I’m unaware of).

BUT… what happens if we want to open it up so that a single instance of a sound can be played through more than one filter at a time? Meaning the dry signal isn’t played multiple times (and hence isn’t any louder, startling the listener with a jump in loudness), and instead of a single-selection dropdown menu, I could tick checkboxes for 2, 3, or even 4 of these “audio filters”, with the routing in parallel.

Conceptually this seems kind of like the built-in Unity Reverb Zones, but basically playing one instance of a sound through multiple Reverb Zone-esque signal chains, as it were. I notice Unity - Manual: Reverb Zones mentions “You can mix reverb zones to create combined effects.”, so I’m wondering what the equivalent structure looks like in an FMOD Studio project.

What might be a good way to establish this?

I think setting up send buses for each of your desired effects chains and using parameters to automate the volume of each send would be the way to go. Here is a simple example using discrete parameters to set the send level to 0 dB when active and -∞ dB when inactive:

[screenshot: send level automated between 0 dB and -∞ dB by a discrete parameter]
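From the game's side, your code would then just set that parameter on the event instance. A minimal sketch for Unity (the event path and parameter name here are placeholders for your own project):

```csharp
using UnityEngine;
using FMODUnity;

public class FilteredPlayback : MonoBehaviour
{
    FMOD.Studio.EventInstance instance;

    void Start()
    {
        // Placeholder event path; this event would contain the
        // Programmer Instrument and the sends shown above.
        instance = RuntimeManager.CreateInstance("event:/UGC/FilteredPlayback");
        instance.start();
    }

    // Hook this up to the dropdown/checkbox UI.
    // 1 = send at 0 dB (active), 0 = send at -inf dB (inactive).
    public void SetOldTVFilter(bool active)
    {
        instance.setParameterByName("OldTVSend", active ? 1f : 0f);
    }
}
```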


Note that the signal path would be fixed in this case. Alternatively, you could add effects to the event at runtime using System::createDSPByType and ChannelControl::addDSP, which would allow you to set up effect chains in any order.
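In Unity's C# that would look roughly like the following (untested sketch; it assumes `instance` is a playing EventInstance, and note that the instance's ChannelGroup only becomes valid once the event has actually started):

```csharp
// Create a DSP on the Core System and insert it at the head of the
// event instance's channel group. PITCHSHIFT is just an example type.
FMOD.DSP pitchShift;
FMODUnity.RuntimeManager.CoreSystem.createDSPByType(FMOD.DSP_TYPE.PITCHSHIFT, out pitchShift);
pitchShift.setParameterFloat((int)FMOD.DSP_PITCHSHIFT.PITCH, 0.5f); // one octave down

FMOD.ChannelGroup group;
instance.getChannelGroup(out group);
group.addDSP((int)FMOD.CHANNELCONTROL_DSP_INDEX.HEAD, pitchShift);
```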

@jeff_fmod Thank you for the thoughtful reply and the very helpful screenshot example. :sparkling_heart:

I should clarify that in some cases, sends might be unsuitable when I’m looking for “insert effects-style” processing. For example, if I’m treating a sound with the Pitch Shifter and I don’t want any dry signal at all, I want it to be 100% wet — so a send isn’t the right answer there, right?

Otherwise, if that’s a workable tradeoff, then it’d be something like:

  • 1 send for each unique effects chain
  • More than 1 send at a time could be used to “stack” multiple effects chains in parallel
  • Just need to keep everything organized as the number of sends (instead of events used for effects processing) grows. But similar to folders for events, I can use groups for the returns in the Mixer window, yeah?

(I’m presuming sends in FMOD Studio behave similarly to how they do in Ableton and other DAWs in this regard: the dry signal still comes through in full, and the wet signal is adjusted to taste.)
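For reference, on the Unity side I’m imagining the checkboxes each driving one of those send parameters, something like this (all names made up for illustration):

```csharp
// Each "audio filter" checkbox maps to one discrete parameter that
// opens (1) or closes (0) the corresponding send. Names are made up.
static readonly string[] filterParams =
    { "OldTVSend", "RadioSend", "TelephoneSend", "CaveSend" };

public void ApplyFilters(FMOD.Studio.EventInstance instance, bool[] enabled)
{
    for (int i = 0; i < filterParams.Length; i++)
    {
        instance.setParameterByName(filterParams[i], enabled[i] ? 1f : 0f);
    }
}
```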

Good to know about the code options that I can make our engineers aware of.

If you need to adjust the wet/dry balance of effects at runtime then you will either need to set up parameters to automate the wet/dry level on the DSPs on the sends, or set the wet/dry levels on the DSPs yourself in code. It sounds to me like your use case for DSPs requires more flexibility than we provide inside FMOD Studio, so I would suggest you do all of this with the FMOD API instead.
We have a C# example of how to add DSPs to a bus at runtime in our Spectrum Analysis Example, which should be a good starting point.
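For example, DSP::setWetDryMix lets you set the wet/dry levels on a DSP directly; a fully-wet insert (no dry signal, as in the pitch shifter case above) would be something like:

```csharp
// prewet = 1 (full signal into the effect), postwet = 1 (full wet out),
// dry = 0 (no unprocessed signal passes through).
pitchShift.setWetDryMix(1f, 1f, 0f);
```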
