How do Parameter Priorities work?

Hi there, I’m a student learning FMOD and Game Audio through my university and I’m working on a Final Project surrounding that. I wanted some clarification on something that I’ve discovered whilst trying to do some integration work on my music today.

I have designed the music around 4 central areas in my platformer level, where each area results in a unique mix of my Music Event based on triggers being sent from Unreal to FMOD to do automation and parameter work, yadda yadda… What I’m confused about is this: in a situation where multiple parameters affect one control, which parameter is selected as the ‘dominant’ value, so to speak?

My parameter “AreaMiddle” dictates that Piano Main is muted at 0 and at 0 dB at 1; “AreaPlatform” does the same thing. Ideally, if either of the two is set to 1, it would set the track back to 0 dB, but instead I find that both must be set to 1.

Additionally, another parameter “NearOrb” sets all tracks to mute except for one; when this is set to 0, even with “AreaMiddle” and “AreaPlatform” set to 1, the Piano Main track is muted.

This makes sense to me if, for volume automation, muting/lowering takes a higher priority than increasing when multiple parameters are used, but my real question is: why? Is this something that can be changed? How do these priorities work for controls on other effects? My assumption here is that whenever multiple parameters are mapped to the same control, it always takes the lowest value. Is this the case?

I have gotten around this by nesting multiple Event Instruments in my Music Event on different tracks, each containing the instruments with the unique mix I require for the different areas. That way, each volume control is only affected by its “Area___” parameter and the “NearOrb” parameter, which works for my purposes and is consistent with my hypothesis above. Still, if someone could clarify this for me, that would be really helpful!

This is a complex topic, so I’m afraid this is going to be a long and rambling post. Please accept my apologies for that.

To start with, there is no “dominant” automation curve. Instead, whenever a property is automated on multiple different parameters, all of them are taken into account, and the values to which they set the property are combined in order to generate the final value.

As for how they’re combined, that actually depends on which property is being automated. Different properties can represent very different things that work in very different ways, so we’ve given each property a method of combination that suits what that property needs:

  • For properties that are measured in dB and can go as low as -∞ dB (like the track and instrument volume properties you mentioned), the property values are summed together unless at least one of those property values is -∞ dB, in which case the final value is -∞ dB.
  • For properties measured in percentages, the values are multiplied together.
  • For track panner properties other than height, extent, and LFE and properties other than extent in the pan override drawer of a spatializer effect, the values are summed together.
  • For the compressor effect’s “Threshold” property, the lowpass effect’s “Cutoff” property, and the reverb effect’s “High Cut” property, the lowest value is used.
  • For the compressor effect’s “Ratio” property, the chorus effect’s “Depth” and “Mix” properties, the delay effect’s “Feedback” property, the highpass effect’s “Cutoff” and “Resonance” properties, the highpass simple effect’s “Cutoff” property, the lowpass effect’s “Resonance” property, the spatializer effect’s pan override drawer’s “Extent” and “Mix” properties, any instrument’s “Start Offset” property, the scatterer instrument’s “Spawn Total” and “Spawn Rate” properties, and the track panner’s “Extent” property, the highest value is used.
  • For all other properties, the values are averaged.
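To make these rules concrete, here is a hypothetical Python sketch of each combination method described above. This is an illustration of the rules, not FMOD’s actual implementation, and the function names are mine:

```python
import math

def combine_db(values):
    # dB properties that can reach -inf dB: sum, unless any value is -inf dB
    if any(math.isinf(v) and v < 0 for v in values):
        return float("-inf")
    return sum(values)

def combine_percentage(values):
    # Percentage properties (expressed here as fractions, 100% = 1.0): multiply
    result = 1.0
    for v in values:
        result *= v
    return result

def combine_lowest(values):
    # e.g. compressor "Threshold", lowpass "Cutoff", reverb "High Cut"
    return min(values)

def combine_highest(values):
    # e.g. compressor "Ratio", delay "Feedback", track panner "Extent"
    return max(values)

def combine_average(values):
    # All other properties
    return sum(values) / len(values)
```

For example, `combine_db([0.0, float("-inf")])` yields -∞ dB, which matches the muting you observed when “NearOrb” is at 0 even though both “Area” parameters are at 1.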

Incidentally, this information is all covered in our documentation.

There’s currently no way for you to change which combination method is used for a given property. A few people have already asked that we add a feature that lets them do it, though, so it’s already in our feature/improvement tracker. I’ll add you to the list of people who’ve made the request.

As you may have already worked out from reading my explanation above, your hypothesis is slightly off the mark. In reality, track volume and instrument volume are both properties measured in decibels which can go all the way down to negative infinity, so when they’re automated on multiple parameters, the values they’re set to by those parameters are summed together to get the final property value. This produces identical values to your hypothesis in cases where all but one of the automation curves is setting the property to 0 dB and no automation curve is setting it to a positive value, but different values otherwise.
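To show where summing and the “take the lowest value” hypothesis diverge, here is a small hypothetical Python sketch (the `summed` helper is my own illustration, not an FMOD API):

```python
import math

def summed(values):
    # The rule for dB properties that can reach -inf dB:
    # sum the values, with -inf dB dominating
    if any(math.isinf(v) and v < 0 for v in values):
        return float("-inf")
    return sum(values)

# Case 1: all but one curve at 0 dB - summing and "lowest" agree
print(summed([0.0, -6.0]), min([0.0, -6.0]))    # -6.0 -6.0

# Case 2: two curves both attenuating - summing gives a quieter result
print(summed([-6.0, -6.0]), min([-6.0, -6.0]))  # -12.0 -6.0
```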

In any case, your workaround will work just fine as a way of getting the behavior you want.

Thank you so much for the incredibly detailed explanation! This is exactly what I was looking for, and it’s incredibly helpful.

This would be useful to see in the future for sure, and would add an extra layer of complexity that I think would help a lot of people with edge-case scenarios, as well as more common ones like mine.

EDIT: I only just realized you linked to the documentation after reading this reply initially from my email, thanks for that!