Edit: I got it to work, but it seems like the sound only becomes spread out once the listener isn’t standing right on top of it. Is there any way to retain the stereo image even when the sound is right in the center?
I am trying to integrate FMOD into my game, and I ran into an issue playing stereo 3D sounds.
The documentation says that playing a stereo sound results in the two channels being played as separate voices, and that the width between them can be adjusted with FMOD_Channel_Set3DSpread.
What I’ve tried: creating a sample from a stereo file, playing it with paused set to true, and calling FMOD_Channel_Set3DSpread(channel, 360). The sound still gets collapsed to a mono mix of left+right instead of being spread out.
I have tried different angle values; none of them sound much different.
Am I supposed to somehow retrieve the sub-voices and spread them out individually?
I’m using the core API from C.
Thank you for any help.
What version of FMOD are you using? Would it be possible to get a code snippet uploaded to look over?
Thank you for responding, and my apologies for the delayed reply.
My header file says:
#define FMOD_VERSION 0x00020212 /* 0xaaaabbcc → aaaa = product version, bb = major version, cc = minor version.*/
I’m calling FMOD_System_Create with the version constant, FMOD_System_Init with FMOD_INIT_NORMAL, and FMOD_System_CreateSound with a UTF-8 string containing the path to the sound and FMOD_CREATESAMPLE | FMOD_3D | FMOD_LOOP_NORMAL as the mode. I then call FMOD_System_PlaySound with paused set to true, and, to make the stereo file wide, FMOD_Channel_Set3DSpread with 150 degrees (I’ve tried other values; anything above 150 doesn’t sound right to me). I also call FMOD_System_Set3DListenerAttributes and FMOD_System_Update every frame.
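For reference, here is a minimal sketch of that call sequence (the file path and channel count are placeholders, and error checking is omitted for brevity):

```c
#include "fmod.h"

int main(void)
{
    FMOD_SYSTEM  *system  = NULL;
    FMOD_SOUND   *sound   = NULL;
    FMOD_CHANNEL *channel = NULL;

    FMOD_System_Create(&system, FMOD_VERSION);
    FMOD_System_Init(system, 512, FMOD_INIT_NORMAL, NULL);

    /* Stereo file loaded as a 3D looping sample; "river.ogg" is a placeholder path. */
    FMOD_System_CreateSound(system, "river.ogg",
                            FMOD_CREATESAMPLE | FMOD_3D | FMOD_LOOP_NORMAL,
                            NULL, &sound);

    /* Start paused so the spread can be set before the sound is audible. */
    FMOD_System_PlaySound(system, sound, NULL, 1, &channel);
    FMOD_Channel_Set3DSpread(channel, 150.0f);
    FMOD_Channel_SetPaused(channel, 0);

    /* Done once per frame in the real game loop: */
    FMOD_VECTOR pos = { 0 }, vel = { 0 };
    FMOD_VECTOR fwd = { 0.0f, 0.0f, 1.0f }, up = { 0.0f, 1.0f, 0.0f };
    FMOD_System_Set3DListenerAttributes(system, 0, &pos, &vel, &fwd, &up);
    FMOD_System_Update(system);

    return 0;
}
```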
The problem is that when the listener’s z equals the sound’s z, the sound collapses to mono (as if the 3D spread were at its default of 0). I would prefer it to stay spread out, the way it sounds when the listener’s z is less than the sound’s z.
Another problem is that when the listener’s z is greater than the sound’s z, the stereo image flips: the left channel becomes the right one and vice versa. That confuses me, because the listener is still facing forward.
I hope that makes sense/I’d appreciate any pointers on how to get the behavior I desire.
For example, I have a river sound that would sound bad collapsed to mono when the listener is standing inside the river. I can understand it collapsing to mono when the listener is directly to the left or right of the source, but I’d expect it to gradually widen as I approach the source’s middle and then gradually narrow again as I move toward its left edge. That is the behavior I get when the listener is at +z or -z, except that +z reverses the channels.
I hope I have understood the defaults correctly: in FMOD’s left-handed coordinate system, +x = right, -x = left, +y = above, -y = below, +z = in front, -z = behind?
In the engine I have been using so far, y was in front/behind and z was above/below, so I want to make sure I have the axes right.
Thank you for the explanation. I was not able to reproduce the issue with our core examples. Would it be possible to get a stripped-down project where you are experiencing this?