Separate sounds for multiple listeners (not split-screen)

Hi to all!

Can you give me some advice on how to add different, isolated sounds for 2 listeners?

I’m working on an old-school Resident Evil style game with static cameras in rooms, and so on.
I want to create a system in which the camera that looks at the player hears his footsteps, while a listener on the player hears the sounds of the environment.
That is, I need to separate the sound groups and isolate them from the different listeners, but I still do not quite understand how to do this, either in FMOD or in Unity by editing the listener script.


Hum, that’s interesting. So you’d like the environment sounds to be heard by a listener placed on the player, and the player sounds (footsteps) to be heard by a listener placed on the camera, is that right? Unfortunately, it doesn’t seem possible to mute some events or mixer groups on a specific listener; but I’d also like to know if there’s a workaround.
It may be possible to achieve this by faking the location of the player footsteps, though…

Yep, correct!

I’m thinking about tags and editing the listener script; maybe it’s possible to make the listener on the camera react only to sounds coming from the player, using attenuation or events from animations.

I watched a GDC talk from the It Takes Two developers (although they used Wwise) in which they separated the sounds for the players in split screen. It seemed interesting to me, so I’m trying to do something similar in my project.

You can set an event instance to only be spatialized by certain listeners with Studio::EventInstance::setListenerMask.

That being said, there might be a better way to achieve the behavior you want… Depending on what that behavior is, of course.

Before I can talk about other ways of doing things, though, I need to talk a little about how listeners work.

It’s easy to think of a listener as a kind of “virtual microphone” in the game world, but that’s actually not what they are: they don’t pick up audio signals. In fact, they can’t process audio at all. It may seem counter-intuitive, but listeners don’t actually listen.

Instead, a listener is just a set of co-ordinates that describes a point in space. The FMOD Engine passes these co-ordinates to each event instance in your game, and each event instance uses the co-ordinates to calculate how its effects and parameters should affect that instance’s output.

So, now that we’re on the same page… What do you actually want to achieve by having different events use different listeners?

I ask because there are two different points in space you want to be important (the avatar’s head and the camera), and two different processes involved in spatialization (attenuation and panning). While it’s occasionally useful to use different attenuation for different events, it’s extremely rare to want different panning, especially when the positions used for panning are in close proximity.