Spatializer, emitter's orientation, and stereo sources


For a tactical 2D game (top-down perspective), I was wondering about the influence of the emitter’s orientation.
Despite this post:

I found, playing with the sandbox, that:

  • for a mono source it doesn’t change anything.
  • for a stereo source it affects the way the left and right channels are mixed.
    In the stereo case, if the emitter is facing toward (or directly away from) the listener, the stereo spread is preserved (apart from the distance effect on the spread), but the signal is “monofied” if the emitter faces a perpendicular direction, which is problematic on paper (even if not often perceptible in practice).

For this kind of 2D game, it seems to me the emitter orientation is irrelevant, but we would like the stereo spread of the source to be preserved in all cases. This could be achieved by getting the listener’s coordinates and orienting the emitter toward them, updated every frame. Is that the way to go? Isn’t it a bit “heavy”? Is there another way to achieve this?
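The per-frame workaround described above boils down to pointing the emitter’s forward vector at the listener. A minimal sketch of that math, in plain Python with no FMOD calls (in an actual game you would feed the resulting vector into the event’s 3D attributes each frame; the function name is illustrative):

```python
import math

def face_listener(emitter_pos, listener_pos):
    """Return a unit forward vector pointing from the emitter to the listener.

    Positions are (x, y) pairs in a top-down 2D world. The returned
    vector would be used as the emitter's forward direction, recomputed
    every frame as either object moves.
    """
    dx = listener_pos[0] - emitter_pos[0]
    dy = listener_pos[1] - emitter_pos[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        # Emitter and listener coincide; any direction works, pick one.
        return (0.0, 1.0)
    return (dx / length, dy / length)
```

The per-frame cost is one square root per emitter, which is negligible next to the mixing work FMOD already does, so “heavy” is mostly a code-cleanliness concern rather than a performance one.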

Shouldn’t we have the option to make the emitter always face the listener? The way FMOD currently behaves assumes that a source recorded from the front has stereo spread but one recorded from the side does not. I tend to think the opposite: a recording from the side would probably have more stereo spread!

At this stage you will need to orient the events to match the listener to achieve the result you are after. I agree there should be an option to control this, and you’re not the first to request it. I’ll raise the priority of getting it implemented; however, it’s not currently scheduled, so using the workaround is recommended.

Thanks for your answer.
I just found on the doc:


When the spread angle is 0 (default) a multi-channel signal will collapse to mono and be spatialized to a single point based on ChannelControl::set3DAttributes calculations.

However, my tests in the sandbox show this isn’t true: a stereo file does not collapse to mono.
So one of the following must be the case:

  • The doc is wrong about 0 being the default value.
  • The sandbox doesn’t use the default set3DSpread value.

ChannelControl and its related functions concern only the Core API; spatialization within FMOD Studio is instead performed by the FMOD Spatializer DSP.


One way to achieve this now, before the feature is implemented, is to use “billboard panning”: set the Spatializer to 100% mix override with L/R Stereo, and automate the surround direction and width using the built-in Direction and Distance parameters.
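The two automation inputs behind that trick can be computed from the listener and emitter transforms. A plain-Python sketch of that geometry, assuming the listener’s forward axis is +y and that Direction is measured in degrees with 0 straight ahead (the function name and conventions are illustrative, not FMOD API):

```python
import math

def billboard_pan_params(listener_pos, listener_forward, emitter_pos):
    """Compute (direction_deg, distance): the emitter's bearing relative
    to the listener's facing, wrapped to (-180, 180], and the emitter's
    distance. These feed the Direction/Distance automation of the
    billboard-panning setup described above."""
    dx = emitter_pos[0] - listener_pos[0]
    dy = emitter_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dy)
    # Angles measured clockwise from +y, so atan2(x, y) not atan2(y, x).
    emitter_angle = math.degrees(math.atan2(dx, dy))
    forward_angle = math.degrees(math.atan2(listener_forward[0],
                                            listener_forward[1]))
    direction = emitter_angle - forward_angle
    # Wrap into [-180, 180).
    direction = (direction + 180.0) % 360.0 - 180.0
    return direction, distance
```

With Direction driving the surround pan angle and Distance driving the width (wide up close, narrow far away), the event keeps a listener-facing stereo image regardless of its own orientation.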



Thanks for the trick, I’ll try that!

In fact, I don’t see the point of “event orientation”. In what case would someone want the stereo diffusion of an object to change based on that object’s orientation: mono in some directions, stereo in others? Could someone give me some examples? Does the event orientation do anything other than that?
In my opinion, the stereo-to-mono transformation should be reserved for when the event moves away from the listener, not for when the event turns in place.

I think what you are seeing is a slightly degenerate case of multichannel rotation. Instead of stereo, consider 5.1 for both source and target. The 5.1 source is a sound field, and when the listener is inside it, the event’s rotation moves the source channels around the listener’s channels. Then, as the event moves away, the 5.1 source signals are collapsed onto the listener arc more and more, while still respecting the rotation of the event.
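That behaviour can be pictured as each source channel having an azimuth that rotates with the event and is then pulled toward a single point as the event recedes. A toy model in Python, purely illustrative of the description above and not FMOD’s actual mixer code:

```python
def perceived_azimuths(channel_azimuths_deg, event_rotation_deg, collapse):
    """Model of multichannel rotation plus distance collapse.

    channel_azimuths_deg: each source channel's direction, in degrees.
    event_rotation_deg:   the event's rotation around the listener.
    collapse:             0.0 = listener inside the sound field,
                          1.0 = fully collapsed to one point source.
    """
    rotated = [(a + event_rotation_deg) % 360.0 for a in channel_azimuths_deg]
    centre = event_rotation_deg % 360.0
    result = []
    for a in rotated:
        # Signed offset of this channel from the field's centre direction.
        offset = (a - centre + 180.0) % 360.0 - 180.0
        # Shrinking the offset collapses the field toward a single point.
        result.append((centre + offset * (1.0 - collapse)) % 360.0)
    return result
```

With `collapse = 1.0` every channel lands on the same azimuth, which is exactly the mono collapse the original poster observed; what they are asking for is a mode where the rotation term is ignored for stereo sources.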

For a stereo source this can work well with a 5.1 output, provided that’s your intention for the source. However, I agree it would be good to have a mode that prevents that rotation when the stereo source is simply authored in stereo to give a sense of space.

Thanks for your answer. But I struggle to find the concept of a “5.1 source” useful, or more generally the concept of a non-point source (the “sound size” being more a convenient trick for smoothing the left-right transition than a real physical trait). I mean, if a source has a sense of space and emits meaningfully different information on different channels, developers would probably treat each piece of sound as a separate event, the surround field being created by the combination of all those point sources.
This is only my feeling, and I may of course be wrong: I would be glad to hear about examples of 5.1 sources that meaningfully benefit from being treated that way.

They’re mostly used for background ambiances in which the origins of specific noises are deliberately unclear, as being able to pinpoint the origin of every creak and rattle would undermine the atmosphere the game’s developers are trying to create. Having such sounds change in response to the rotation of the listener helps give the impression that such sounds originate in the environment rather than from points that are attached to and move with the camera. This kind of ambiance is most common in horror games, and sees occasional use in other genres.

You’re right that if a sound designer wants specific sounds to have clearly discernible origins, they would be better off using point source emitters.
