Too-short trigger region -> sounds fail to play (latency?)

I recently ran into a case with vehicle collision sounds in our game: they were not playing at all, even though the “collision strength” parameter was being passed.

The events have an embedded event sound, which contains a blend of three samples crossfaded along the strength parameter. The embedded event's parameter is automated by the parent event that calls it. Why use an embedded event? Because if you place a crossfade blend of oneshot sounds on a parameter of the main event, the sound never stops playing. After placing the oneshots into an event sound and putting that event sound on the timeline, the event stops itself once the oneshot(s) have finished.

In any case, these sounds used to play in the game earlier, so something must have changed. Perhaps the parameter is now being passed one frame later, or something similar.

I was able to fix the issue by making the event sound’s trigger region longer. It seems like a latency issue: if the trigger region ends before the latency has elapsed, the sounds fail to play.
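For reference, here is roughly how our game code triggers the event. This is only a minimal sketch assuming the FMOD Studio C++ API; the event path, parameter name, and function name are placeholders rather than our actual project names. If the cause really is the parameter arriving a frame late, setting it before start() would be one way to sidestep the race on the game side:

```cpp
// Hypothetical game-side call, assuming the FMOD Studio C++ API.
// Event path and parameter name are placeholders.
#include <fmod_studio.hpp>

void playCollision(FMOD::Studio::System* studioSystem, float strength)
{
    FMOD::Studio::EventDescription* description = nullptr;
    studioSystem->getEvent("event:/Vehicles/Collision", &description);

    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);

    // Set the strength BEFORE start(), so the parameter cursor already
    // overlaps one of the crossfaded sounds when the instance begins playing.
    // (Called setParameterByName() in FMOD Studio 2.x.)
    instance->setParameterValue("Strength", strength);

    instance->start();
    instance->release(); // fire-and-forget: freed once the event stops
}
```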

A more robust way to build the event would be to put the three samples in multisounds on their own layers and use volume automation on each to do the blending. But that seems to render the usefulness and purpose of embedded sounds and events moot: if they all cause latency problems, they shouldn’t be used unless latency is not an issue for that particular sound.

Hi Peter,

Sorry, this one seems to have slipped by. It sounds like it might be a case where, when the child event is started, the parameter cursor is not overlapping any sounds; then the parent's timeline cursor leaves the event sound's trigger region, causing the child to stop; then the parameter is set on the child to make it play a sound, but by that point it has no effect. Does that sound correct?

There shouldn’t be any latency added by using the embedded sounds.