Splitscreen vehicle audio phasing

Hi,
I’m working on a sandbox multiplayer game with a GTA-style vehicle system where you can drive any vehicle in the game. For splitscreen we’ve created two listeners and layer-masked the vehicle audio between them, so two events play for each vehicle.

This works well the majority of the time, but if one player sits with a car idling and the second splitscreen player leaves the vehicle’s audible range, the two idle sounds are usually out of sync when the second player returns. This gets fixed once the vehicle starts to be driven and new samples are introduced.

Does anyone have any idea how we can ensure the two samples sync up? We’ve tried syncing the event timelines in code, but it didn’t help; the samples were still slightly out.

Thanks in advance!

It sounds like you’re using two separate instances of each event, one for each listener.

We recommend instead creating only a single instance of the event for both listeners. As described in the Studio Panning for Multiple Listeners section of the Studio API 3D Events white paper in the FMOD API User Manual, when a single event is audible to multiple listeners, its attenuation due to distance is based on the closest listener. Its panning and other 3D attribute-dependent properties are based on its position relative to all the listeners to which it is audible, weighted by distance.

This method avoids all phasing and synchronization issues, consumes fewer resources, allows event instances to seamlessly transition to being audible to more or fewer listeners, and is difficult for humans to distinguish from actually having the event playing in two separate nearby positions at the same time.
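To give a feel for the behavior described above, here’s a rough sketch in plain Python. This is not FMOD’s actual implementation; the linear rolloff curve, the distance range, and the inverse-distance weighting are my own simplifications, but the shape is the same: volume follows the closest listener, while the pan direction is a blend across all listeners.

```python
import math

def blend_for_listeners(emitter, listeners, min_dist=1.0, max_dist=20.0):
    # Rough sketch of one event instance serving several listeners.
    # Positions are (x, y, z) tuples.
    dists = [math.dist(emitter, l) for l in listeners]

    # Attenuation follows the closest listener only, so the event never
    # gets quieter just because a second listener is far away.
    closest = min(dists)
    t = (closest - min_dist) / (max_dist - min_dist)
    volume = 1.0 - min(max(t, 0.0), 1.0)  # linear rolloff, clamped to [0, 1]

    # Pan direction is a per-listener blend of the listener-to-emitter
    # unit vectors, weighted so the nearer listener dominates.
    weights = [1.0 / max(d, 1e-6) for d in dists]
    total = sum(weights)
    pan = [0.0, 0.0, 0.0]
    for w, l, d in zip(weights, listeners, dists):
        for axis in range(3):
            pan[axis] += (w / total) * (emitter[axis] - l[axis]) / max(d, 1e-6)
    return volume, pan
```

Note that with one listener this reduces to ordinary 3D panning, and with two listeners equidistant on opposite sides the pan blends to center while the volume stays the same, which is why a single instance transitions seamlessly as listeners come and go.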

Thanks Joseph, seems like we’ve been over complicating it a bit.

We have a raycast occlusion system which works on the buildings layer in Unity. Currently the raycast casts to both players in splitscreen, which I think led to the vehicle always being occluded when one player was not in line of sight of it. It was a few months ago, but I think this is what led us to use two instances in the first place. Can you recommend any good options for dealing with this using FMOD?

I don’t know exactly how you’re using occlusion, but since the only information the FMOD Engine has about whether it should occlude an event instance is the information your code gives it, I think any solution will have to involve changing your code such that it no longer tells the FMOD Engine to occlude an event instance that is only occluded from the perspective of one listener.

Perhaps you should look into altering the way your raycast system reports occlusion, such that the extent to which an emitter is occluded is the average of the occlusion values it should have relative to both listeners, weighted by their relative distances from the emitter?
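In code, that blend might look something like the following sketch. The per-listener occlusion values are assumed to come from your existing raycasts (0.0 = clear line of sight, 1.0 = fully blocked), and the inverse-distance weighting is just one reasonable choice:

```python
import math

def blended_occlusion(emitter, listener_hits):
    # listener_hits: list of (listener_position, occlusion) pairs, where
    # occlusion is what your raycast found for that listener alone.
    weights = []
    values = []
    for listener_pos, occlusion in listener_hits:
        d = max(math.dist(emitter, listener_pos), 1e-6)
        weights.append(1.0 / d)  # the nearer listener counts for more
        values.append(occlusion)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total
```

You would then feed the blended value to whatever mechanism you’re already using to apply occlusion, e.g. an event parameter. That way a vehicle in full view of the near listener stays mostly unoccluded even when the far listener has no line of sight.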

Thanks Joseph, that’s really helpful! Will give that a go with my programmer.