Hi. I have a crowd that says specific words. Behind the "specific word crowd", I have a layer of generic crowd (almost noise-like) playing at the same time. In order for the "specific word crowd" to blend in more, I would like the "generic wordless crowd" to follow the volume envelope of the "specific word crowd".
I can achieve the opposite via sidechain and compression, but I would like expander-like behaviour instead.
The reason I don't do parameter modulation (volume) via the sidechain is that its resolution is too low and I can hear volume jumps.
Any good ideas?
Thanks
EDIT: I ended up doing the parameter modulation via sidechain after all. Basically, I modulate a parameter with the volume of the specific word crowd track, put the generic crowd in a nested event, and control that nested event's volume with the parameter.
After fiddling around enough, I could get by with no audible volume artifacts.
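For anyone who would rather drive this from game code instead of purely inside FMOD Studio, here's a rough sketch (not my actual setup) of reading the word-crowd level with the Core API metering and feeding it into a parameter each update, with a simple smoothing step so the per-frame jumps aren't audible. The event instances and the "CrowdLevel" parameter name are placeholders.

```cpp
// Sketch only: meter the word-crowd event and drive a studio parameter.
// "CrowdLevel" is a hypothetical 0..1 parameter that scales the generic
// crowd's volume inside its event. Error handling is mostly omitted.
#include <fmod_studio.hpp>
#include <fmod.hpp>
#include <algorithm>
#include <cmath>

void updateCrowdLink(FMOD::Studio::EventInstance* wordCrowd,
                     FMOD::Studio::EventInstance* genericCrowd,
                     float dt)
{
    FMOD::ChannelGroup* group = nullptr;
    if (wordCrowd->getChannelGroup(&group) != FMOD_OK || !group)
        return;

    // Enable output metering on the head DSP of the word-crowd's channel group.
    FMOD::DSP* head = nullptr;
    group->getDSP(FMOD_CHANNELCONTROL_DSP_HEAD, &head);
    if (!head)
        return;
    head->setMeteringEnabled(false, true);

    FMOD_DSP_METERING_INFO meter = {};
    if (head->getMeteringInfo(nullptr, &meter) != FMOD_OK || meter.numchannels == 0)
        return;

    // Average the RMS across channels as a rough envelope value.
    float rms = 0.0f;
    for (int i = 0; i < meter.numchannels; ++i)
        rms += meter.rmslevel[i];
    rms /= meter.numchannels;

    // Smooth towards the target so per-update steps aren't audible.
    static float smoothed = 0.0f;                // fine for a single-instance sketch
    const float smoothingPerSecond = 12.0f;      // assumption: tune by ear
    float t = 1.0f - std::exp(-smoothingPerSecond * dt);
    smoothed += (rms - smoothed) * t;

    genericCrowd->setParameterByName("CrowdLevel", std::min(smoothed, 1.0f));
}
```

Giving the parameter some seek speed in FMOD Studio does a similar smoothing job without the code-side lerp.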
I do wish FMOD updated as fast as Unreal's MetaSounds, though. When doing fast pitch and volume changes you can clearly hear steps; that's not the case with MetaSounds.
Cheers