Virtual voice management on Unity/Switch

I’m developing a game for Nintendo Switch in Unity with FMOD, featuring a streaming scene-loading system, so the player never experiences loading pauses or black screens.
The system is pretty solid on Switch.

Now, to manage audio better, I’m creating an object that persists through scenes and plays some 3D ambiences that keep running across different scenes, because I prefer never to stop certain sources, in order to maintain environmental sound coherence. To optimize voice allocation (and concurrent sound streaming), I’m setting up some snapshots that can “close” (set volume to 0 via buses or VCAs) unwanted persistent ambiences in specific scenes, saving audio resources on the Switch platform.
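For context, this is roughly what the setup looks like in code. A minimal sketch, assuming the FMOD for Unity integration; the event and snapshot paths (`event:/Ambience/Forest`, `snapshot:/MuteForestAmbience`) are placeholders, not real project paths:

```csharp
using UnityEngine;
using FMOD.Studio;
using FMODUnity;

// Persists across scene loads and keeps a 3D ambience playing;
// a snapshot silences it in scenes where it shouldn't be heard.
public class PersistentAmbience : MonoBehaviour
{
    EventInstance forestAmbience; // looping 3D ambience event
    EventInstance muteSnapshot;   // snapshot that pulls the ambience bus/VCA to 0

    void Awake()
    {
        DontDestroyOnLoad(gameObject);
        forestAmbience = RuntimeManager.CreateInstance("event:/Ambience/Forest");
        forestAmbience.start();
        // Snapshots are driven like event instances, via a "snapshot:/" path.
        muteSnapshot = RuntimeManager.CreateInstance("snapshot:/MuteForestAmbience");
    }

    // Call when entering a scene that should not hear this ambience.
    public void Silence() => muteSnapshot.start();

    // Call when returning to a scene where the ambience belongs.
    public void Restore() => muteSnapshot.stop(STOP_MODE.ALLOWFADEOUT);
}
```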

My question is:
is this approach safe enough on Switch from a “virtualization” point of view?
I mean: if some sound sources are muted by a snapshot, do they free resources (and stream channels) thanks to FMOD’s optimization, or should I be concerned about 3-4 concurrent streams playing even though their output level is zero?

Watching the profiler, it looks like CPU streaming, File I/O and Voices (Total) all benefit from muting the sources’ volume or from reducing their 3D max distance parameter.

Can you please confirm that this design is a good way to balance voice allocation and sound-streaming resources in the final build running on Nintendo Switch?


Assuming you haven’t set a priority of 0 for any of these events, they should virtualize, which frees their resources and only updates the playback position. There is a very small amount of overhead, but it won’t cause issues even with very large numbers of concurrent virtual streams playing, and this applies to Switch as well.
I think this approach is good; if you don’t care about playback position being preserved, you can instead stop these environmental sounds.
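To illustrate the alternative, a hedged sketch of stopping rather than virtualizing, assuming an existing `EventInstance` named `ambience` (the name is illustrative):

```csharp
// Option A: leave the instance playing; when the snapshot makes it
// inaudible, FMOD's virtual voice system releases the real voice and
// only advances the playback position, so resuming is seamless.

// Option B: if preserving playback position doesn't matter, stop it
// outright so it holds no stream at all:
ambience.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);

// ...later, on re-entering the scene:
ambience.start(); // restarts from the beginning, not where it left off
```

The trade-off is exactly what the answer describes: Option A keeps a small bookkeeping cost per virtual voice but preserves continuity; Option B frees everything but loses the playback position.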