Relationship between `ChannelControl::set3DAttributes` and `FMOD_DSP_PAN_3D_POSITION` on a `Pan` DSP?

I’m trying to understand the low-level spatialization system, and have a couple questions.

  1. The docs say that a Channel or ChannelGroup starts out with a Fader DSP. What happens when you call `set3DAttributes` to place the channel in 3D space? Is the Pan DSP involved here at all? And how does the system know which Listener position(s) to spatialize relative to?
  2. An alternative way to spatialize a channel seems to be adding a Pan DSP and setting its `FMOD_DSP_PAN_3D_POSITION` property, which makes the source->listener spatial relationship explicit. Does this do essentially the same thing?
  3. How do occlusion and the 3D cone factor in? I’d like to create a directional source; will that work at the Channel level even if I’m using the Pan DSP for most of the spatialization?

I’m working on a project that requires audio sources to be spatialized to multiple listeners independently (rather than the default behavior of averaging across listeners). The idea is to generate a separate stereo stream for each listener. It seems like option #2 above should do what I want, but I want to understand what’s happening with option #1 to make sure I’m not creating weird conflicts or double-spatialization.

I’m starting out trying to do this entirely through the Core API, and then I figure I can start adding Studio features once I have the basics working.

Hi,

The Core API panning system and the Pan DSP are fundamentally two separate systems. The Core API panning system uses its own internal panning logic, which it applies directly to the mix matrix of one of a given Channel or ChannelGroup’s DSP connections. Occlusion features such as the 3D cone settings are handled at the same level as the Core API’s panning system. The Pan DSP is what Studio primarily uses for spatialization: it’s a single DSP placed in the DSP chain that outputs a panned version of the input signal based on its parameters. Both the Core API panning system and the Pan DSP use the System’s Listener 3D attributes to determine listener position for spatialization.
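To make the two paths concrete, here’s a rough sketch of each (not authoritative; error checking omitted, "source.wav" is a hypothetical asset, and enabling surround mode before using the Pan DSP’s 3D parameters is an assumption on my part):

```cpp
#include "fmod.hpp"

// Path 1: Core API panning. Creating the sound with FMOD_3D opts it into the
// built-in panning system, which reads the System's listener 3D attributes.
void coreApiPath(FMOD::System *system)
{
    FMOD::Sound *sound = nullptr;
    system->createSound("source.wav", FMOD_3D, nullptr, &sound); // hypothetical asset

    FMOD::Channel *channel = nullptr;
    system->playSound(sound, nullptr, false, &channel);

    FMOD_VECTOR pos = { 1.0f, 0.0f, 2.0f };
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };
    channel->set3DAttributes(&pos, &vel); // applied via the connection's mix matrix
}

// Path 2: explicit Pan DSP. FMOD_DSP_PAN_3D_POSITION is a data parameter
// taking an FMOD_DSP_PARAMETER_3DATTRIBUTES_MULTI struct.
void panDspPath(FMOD::System *system, FMOD::Channel *channel)
{
    FMOD::DSP *pan = nullptr;
    system->createDSPByType(FMOD_DSP_TYPE_PAN, &pan);
    channel->addDSP(0, pan); // insert at the head of the channel's DSP chain

    // Assumption: surround mode is what makes the 3D pan parameters apply.
    pan->setParameterInt(FMOD_DSP_PAN_MODE, FMOD_DSP_PAN_MODE_SURROUND);

    FMOD_DSP_PARAMETER_3DATTRIBUTES_MULTI attr = {};
    attr.numlisteners = 1;
    attr.weight[0] = 1.0f;
    attr.relative[0].position = { 1.0f, 0.0f, 2.0f }; // source relative to listener
    attr.relative[0].forward  = { 0.0f, 0.0f, 1.0f };
    attr.relative[0].up       = { 0.0f, 1.0f, 0.0f };
    pan->setParameterData(FMOD_DSP_PAN_3D_POSITION, &attr, sizeof(attr));
}
```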

It should be possible to use some combination of mix matrix settings or Channel Mix DSPs to split your individual sounds into their own channels for separate listeners, but fundamentally both the Core API panning system and the Pan DSP will average between listeners when spatializing. The Studio API, on the other hand, lets you set a Listener mask with `EventInstance::setListenerMask`, which allows you to spatialize an event instance against an arbitrary selection of Listeners. However, depending on the complexity and expected overhead of your use case, running a number of simultaneous FMOD Systems with individual listeners is also an option.
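For example, a minimal sketch of the listener-mask approach (assuming a Studio System already initialized and a bank loaded; "event:/Spatialized" is a hypothetical event path):

```cpp
#include "fmod_studio.hpp"

// One event instance per listener, each masked to exactly one listener,
// which bypasses the default multi-listener averaging.
void playPerListener(FMOD::Studio::System *studio, int numListeners)
{
    FMOD::Studio::EventDescription *description = nullptr;
    studio->getEvent("event:/Spatialized", &description); // hypothetical event

    for (int i = 0; i < numListeners; ++i)
    {
        FMOD::Studio::EventInstance *instance = nullptr;
        description->createInstance(&instance);
        instance->setListenerMask(1u << i); // spatialize against listener i only
        instance->start();
    }
}
```

Note that each masked instance still needs its own 3D attributes set with EventInstance::set3DAttributes.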

Thank you, that’s super helpful.

> fundamentally both the Core API panning system and the Pan DSP will average between listeners when spatializing

My plan was not to create any FMOD listeners at all, and instead rely on manually setting the `FMOD_DSP_PAN_3D_POSITION` property of the Pan DSP whenever any of my listener or source positions change. Would that work, or will the system try to update the panner behind the scenes?

The plan is to have N sources, and M custom listeners (that are doing the channel merging/routing), and a grid of M*N Pan DSPs in the middle that are doing all the spatialization.
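In case it clarifies, here’s a rough sketch of the wiring I mean (purely illustrative: it assumes DSP::addInput can be used for the fan-out, uses the groups’ head DSPs as connection points, and omits error checking; I expect each source group’s default route to the master group would also need to be muted or disconnected):

```cpp
#include <vector>
#include "fmod.hpp"

// N source ChannelGroups fan out into M listener ChannelGroups, with one
// Pan DSP per (listener, source) pair doing the spatialization in between.
void buildGrid(FMOD::System *system,
               const std::vector<FMOD::ChannelGroup *> &sourceGroups,   // size N
               const std::vector<FMOD::ChannelGroup *> &listenerGroups, // size M
               std::vector<std::vector<FMOD::DSP *>> &panGrid)          // M x N, pre-sized
{
    for (size_t m = 0; m < listenerGroups.size(); ++m)
    {
        FMOD::DSP *listenerHead = nullptr;
        listenerGroups[m]->getDSP(FMOD_CHANNELCONTROL_DSP_HEAD, &listenerHead);

        for (size_t n = 0; n < sourceGroups.size(); ++n)
        {
            FMOD::DSP *sourceHead = nullptr;
            sourceGroups[n]->getDSP(FMOD_CHANNELCONTROL_DSP_HEAD, &sourceHead);

            FMOD::DSP *pan = nullptr;
            system->createDSPByType(FMOD_DSP_TYPE_PAN, &pan);

            // source n -> pan(m, n) -> listener m's group
            pan->addInput(sourceHead, nullptr, FMOD_DSPCONNECTION_TYPE_SEND);
            listenerHead->addInput(pan);
            panGrid[m][n] = pan;
        }
    }
}
```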

No problem!

The system shouldn’t update the Pan DSP behind the scenes. Additionally, if you don’t create a sound with the `FMOD_3D` mode flag, the Core panning system won’t act on the sound, or on any channels it’s playing on, at all.
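Putting it together, a minimal sketch of the manual per-update step (assuming the sound was created with FMOD_2D, and assuming axis-aligned listeners; a full version would rotate the offset into listener space, and subtract() is a hypothetical helper):

```cpp
#include "fmod.hpp"

// Hypothetical helper: component-wise vector subtraction.
static FMOD_VECTOR subtract(FMOD_VECTOR a, FMOD_VECTOR b)
{
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// Assumes the sound was created WITHOUT FMOD_3D, so the Core panning system
// ignores it; we drive the Pan DSP's 3D position ourselves whenever any of
// our own source or listener positions change.
void updatePan(FMOD::DSP *pan, FMOD_VECTOR sourcePos, FMOD_VECTOR listenerPos)
{
    FMOD_DSP_PARAMETER_3DATTRIBUTES_MULTI attr = {};
    attr.numlisteners = 1;
    attr.weight[0] = 1.0f;
    attr.relative[0].position = subtract(sourcePos, listenerPos); // listener-relative
    attr.relative[0].forward  = { 0.0f, 0.0f, 1.0f };
    attr.relative[0].up       = { 0.0f, 1.0f, 0.0f };
    pan->setParameterData(FMOD_DSP_PAN_3D_POSITION, &attr, sizeof(attr));
}
```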