Hey there - spotted this and couldn’t help but chime in…
I have done this sort of thing in the past, and whilst it will give you the kind of flexibility you are after (ducking etc. at the individual instance level, in all the wonderful fancy ways that FMOD allows), I don’t necessarily recommend it unless you absolutely have to.
I promise that you will get better results if you do what @alexzzen hints at and manage the individual sounds through some ‘robot audio handler’ that you write yourself. Big complex FMOD Events can be very powerful, but you can also get yourself into all sorts of tangles, and there are some limitations.
In particular, you will find (but please test this yourself) that triggering an instrument clip via a parameter change will have audibly more latency than creating and starting a dedicated event for that sound! And latency is really something you want to avoid, if at all possible.
Also, let’s suppose you end up with a step and a pain sound for your robot. Later, you decide you want the pain sound to have voice-stealing priority over the step (seems pretty likely). You won’t be able to utilize the event-level stealing behaviour, because they are in the same event!
I do recommend complex master events for certain things, like smooth loop layers interacting with each other, and of course music and ambience beds and so on. But my professional opinion is that you end up with something much more robust by using a careful combination of ‘complex multi-sound events’ and ‘dedicated single-purpose events’, all managed by a ‘master’ script (replacing the Wwise ‘object’ that we lack).
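To make that concrete, here’s a rough sketch of what I mean by a ‘master’ handler, with my own made-up names (`RobotAudioHandler`, the `event:/robot/...` paths, the priority numbers are all just for illustration). The actual FMOD Studio API calls (`createInstance`, `start`, `stop`, `release`) are left as comments, so this is just the voice-stealing bookkeeping you’d wrap around your dedicated single-purpose events, not a drop-in implementation:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical per-robot handler that owns dedicated single-purpose
// events and applies its own voice stealing by priority, since the
// sounds are no longer competing inside one big master event.
struct Voice {
    std::string event;  // e.g. "event:/robot/step" (illustrative path)
    int priority;       // higher number wins when we run out of voices
};

class RobotAudioHandler {
public:
    explicit RobotAudioHandler(std::size_t maxVoices) : maxVoices_(maxVoices) {}

    // Try to play an event; steal the lowest-priority voice if full.
    // Returns false if every active voice outranks (or equals) the request.
    bool play(const std::string& event, int priority) {
        if (active_.size() < maxVoices_) {
            // Real code: eventDescription->createInstance(); instance->start();
            active_.push_back({event, priority});
            return true;
        }
        // Find the lowest-priority active voice as the steal candidate.
        std::size_t victim = 0;
        for (std::size_t i = 1; i < active_.size(); ++i)
            if (active_[i].priority < active_[victim].priority) victim = i;
        if (active_[victim].priority >= priority)
            return false;  // nothing weaker to steal
        // Real code: instance->stop(FMOD_STUDIO_STOP_IMMEDIATE); instance->release();
        active_[victim] = {event, priority};
        return true;
    }

    std::size_t activeCount() const { return active_.size(); }

private:
    std::size_t maxVoices_;
    std::vector<Voice> active_;
};
```

The point being: because each sound lives in its own event, the stealing rule (pain beats step) is one `if` in your own code, rather than a fight with the stealing behaviour of a shared parent event.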
My general rule (which I don’t always follow, and inevitably regret it when I don’t) is this: if something can be easily and painlessly pulled out to the code level, without significantly harming your ability to tweak things at runtime with Live Update etc., then you should absolutely consider doing it.
But I’m just one soundie in a sea of soundies, you do you!