I’m transitioning an existing project to the new Google VR spatialisation plugin. Until now, we’ve ducked gameplay sounds and music during dialogue by sidechaining from a “Dialogue” group into compressors on the “Gameplay” and “Music” groups.
I see from Joseph’s answer about GVR and routing structure that the GVR listener bypasses the normal group mixing and routing.
Has anyone had success with a workaround for treating sounds as groups under GVR? Here’s how far I got: I can add a pre-panner send from all dialogue to a “signal analysis” group containing a sidechain; that group sits inside a muted group, so no extra audio is output. But that still leaves the question of how to duck gameplay sounds from that sidechain signal. Even if I put a compressor effect on every single gameplay event (yuck), I can’t connect those compressors to the sidechain on the dialogue analysis group.
I’m stumped! Any advice would be most appreciated.