Virtualising instances with changing levels

I’m working on a game prototype in which there are many, many emitters around the map, all playing the same event. Let’s say the event is a fire, which you can put out with water, changing the instance’s “strength”. I’m struggling to get voice virtualisation to work well, because several parameters drive different parts of the event - high frequency and low frequency content based on distance, the overall “strength” of the sound which you can manipulate by interacting with it, and so on.

There are no other events present, so event priority is not a concern.

Currently, the event is like so:

  • High frequency track, automated by distance and strength
  • Mid frequency track, as above
    – the above two tracks are routed into a new general track
  • Sizzle one-shot track, which plays when the strength is automated (putting water on the fire)
  • Sizzle loop track, which plays when the strength is automated
    – the above two tracks are routed into a new sizzle track
  • Master track, automated by the fire strength, with a spatialiser on it

All volume automation is done on gain plugins, and not on the track faders.

As you can see, there is a lot of gain automation going on. The problem I’m having is that if I set the max instances to 5 or so, as the level of the event goes down due to my interacting with it, at a certain point it is stolen and just abruptly cuts out. I can see this in the profiler, and raising the max instances stops this from happening.

I understand that there is a list of things considered when working out an instance’s audibility, but is there any way in FMOD to maintain high priority on nearby events? The threshold for what it considers no longer sufficiently “audible” is far too high for my situation, and I haven’t found any way to make it behave. The only other option I can see at the moment is to build a culling system in Unreal, where sounds are played and stopped based on distance, but as this is a prototype, it would be great not to have to go down that road.

Any thoughts? Thanks for your help!

Well, there’s your problem. You have set the event’s max instances to five, and have more than five instances playing at audible volumes. That means that instances must be culled when playing audibly, and so will be suddenly cut off.

Yes, but it wouldn’t solve the problem: Audible instances of the event would still be virtualized or culled while audible and so would still cut off abruptly; the only difference would be that it would affect the furthest event instances instead of the quietest.

Unfortunately, without knowing more about what you’re trying to achieve, it’s impossible to say what you should do differently to achieve it. What behavior are you trying to achieve, and why have you set the event’s max instances to five?

Whoops, I wasn’t very clear there. Setting the max instances to 5 was just an example - I would like to have more than that. The problem is that if I have 6 audio sources (bubbling cauldrons) in a line in front of me, each playing an event with a limit of 5 voices, then as I interact with the first one in the line and decrease its level, it abruptly cuts out once it drops below a certain point.

I understand that it’s because it’s gone below a certain level, but I would much rather have the nearby sounds you are interacting with keep playing, and the further away ones cut out, regardless of level. I think this would be basically unnoticeable due to the nature of the sound, especially if the max count is something more like 15 – you wouldn’t notice the 15th furthest away (and quieter) sound being culled, as it’s the same loop as 14 other emitters nearby. You will certainly notice the one near you that you are interacting with suddenly stop, though.

Is it possible to do something like this in FMOD, or with some relatively simple blueprint work that won’t require making a system which starts and stops sounds based on distance? I feel later down the line that may be necessary, but for now it would be a massive time saver to not have to do this for our prototype.
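For what it’s worth, the distance-based culling system described above doesn’t have to be complicated. This is not FMOD API code - it’s a minimal, self-contained C++ sketch of the idea, where a hypothetical `Emitter` struct and its `playing` flag stand in for starting and stopping an FMOD event instance from game code:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch (not FMOD API): keep only the N nearest emitters
// playing, regardless of their current gain. The `playing` flag stands in
// for starting/stopping an event instance from game code.
struct Emitter {
    float x, y, z;
    bool playing = false;
};

class DistanceCuller {
public:
    explicit DistanceCuller(std::size_t maxPlaying) : maxPlaying_(maxPlaying) {}

    // Call once per tick with the listener position.
    void Update(float lx, float ly, float lz, std::vector<Emitter>& emitters) {
        auto distSq = [&](const Emitter& e) {
            float dx = e.x - lx, dy = e.y - ly, dz = e.z - lz;
            return dx * dx + dy * dy + dz * dz;
        };

        // Rank emitters by distance to the listener, nearest first.
        std::vector<std::size_t> order(emitters.size());
        for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
        std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
            return distSq(emitters[a]) < distSq(emitters[b]);
        });

        // Start the nearest maxPlaying_ emitters; stop everything else.
        for (std::size_t rank = 0; rank < order.size(); ++rank) {
            Emitter& e = emitters[order[rank]];
            bool shouldPlay = rank < maxPlaying_;
            if (shouldPlay != e.playing) {
                e.playing = shouldPlay;  // start or stop the event instance here
            }
        }
    }

private:
    std::size_t maxPlaying_;
};
```

In Unreal, `Update` would run on a timer or on tick, starting and stopping event instances (or their Blueprint equivalents) as emitters cross the rank threshold; in practice you would also want a hysteresis margin so an emitter hovering at the boundary doesn’t rapidly start and stop.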

I’ve also tried changing the FMOD channel priority, using Set Property to set it to 256 on interacted-with objects and 128 on ones that aren’t currently being interacted with, after reading these pages. It hasn’t changed anything, though, so perhaps I’ve misunderstood it. The attached image shows what I tried. The bottom Set Property node is definitely called, and the value is set correctly.

I think I may also have found a bug: the gain plugins that I have included on every track (automated by the event’s “strength” when it is interacted with) continue to affect the culling system even when bypassed. If I bypass every gain plugin with this automation, the event still gets cut off after it has been interacted with for a short while – this is the behaviour I don’t want but can’t avoid at the moment. When the gain plugins are deleted, however, this culling does not happen and the sound remains. It seems that the gain plugins, although bypassed, are taken into account when calculating the audibility of the event. It took me quite a while to realise that this was happening, so I can only assume it’s a bug, because it is very counterintuitive.

Hopefully this is clearer, and apologies if it sounds a bit vague and confusing at the moment!

Yes, it’s possible. There are a couple of ways in which you could do it.

As described in our documentation on the topic, the audibility of an event is calculated based only on the content of its master track. You can therefore move your non-distance-based effects onto other tracks of the event: create a new audio track, re-route your other audio tracks into it, and move any non-distance-automated gain effects from the event’s master track onto this new track, so that it acts as a pseudo-master track that submixes the outputs of all the other tracks. The event’s audibility calculation will then take only distance-based attenuation into account, and will ignore gain effects automated on other properties.
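As a sketch, the restructured routing for the event described earlier in the thread might look something like this (track names are illustrative):

```
Master track              ← spatialiser only; audibility is calculated here
└─ Pseudo-master track    ← strength-automated gain, moved here from the master track
   ├─ General track
   │  ├─ High frequency track   (gain automated by distance and strength)
   │  └─ Mid frequency track    (gain automated by distance and strength)
   └─ Sizzle track
      ├─ Sizzle one-shot track
      └─ Sizzle loop track
```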

Alternatively, you could change the event’s stealing mode from “virtualize” to “furthest.” However, while this method may sound easier to implement, it would also cause excess event instances to be culled rather than virtualized, meaning that they could not come back later without new event instances being started by your game’s code.

Channel-level virtualization is distinct from event-level virtualization. Both result in channels being virtualized, but channel-level virtualization occurs based only on the audibility and priority of individual channels, while event-level virtualization occurs based only on the audibility of event master tracks (or more accurately, the channel groups corresponding to the master tracks of event instances). Channel priority affects only the calculated audibility of channels, and has no impact on the calculated audibility of event instances, and so is not useful in this case.

This is not a bug, but rather the intended behaviour of effect bypass.

Effect bypass is an auditioning tool, and has no impact on in-game behavior when live update is not connected. It is designed to support easy A/B testing of how an effect alters a signal’s audible qualities; having it affect whether an event’s signal was audible at all would prevent it from being used for that purpose, and so would be counter-productive.

Thanks for the suggestion - I’ll try making my own “master” track and see how that helps. Thanks for clearing up the priority thing too.

I think I understand now what you mean about the effect bypass not affecting the audibility calculation - for example, a far-away sound with a spatialiser on it may be culled for being too far away, so bypassing the spatialiser to make it 2D wouldn’t bring it back. If this is right, thanks for explaining.

I’m trying out your suggestion now, and it works perfectly. Thanks very much!