I’m not sure if this is a bug or user error (most likely, it’s me).
I was using setSpeakerPosition() to specify the physical angles of the actual speaker layout. Just to make sure it worked, I used setSpeakerPosition() to swap the right and left speakers. I played a 3D sound “from the right” and it now came out of the left speaker. Cool.
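In case it helps anyone reading later, the swap test looked roughly like this (coordinates are illustrative rather than my exact values; coreSystem is the FMOD::System* obtained from Studio::System::getCoreSystem()):

```cpp
// Core API: tell FMOD where the physical speakers actually are.
// x: -1 = left, +1 = right; y: +1 = front, -1 = rear.
// Swapping L/R: give each front speaker the other's position (~30 degrees off-center).
coreSystem->setSpeakerPosition(FMOD_SPEAKER_FRONT_LEFT,   0.5f, 0.866f, true);  // now on the right
coreSystem->setSpeakerPosition(FMOD_SPEAKER_FRONT_RIGHT, -0.5f, 0.866f, true);  // now on the left
```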
Later on, I was using a 3D Event, moving it around using event->set3DAttributes(), which works as expected; the event sound moves where I put it.
But I then tried the “speaker swap” again, and nothing happened. The 3D event played from the right came from the right speaker regardless of setSpeakerPosition(). Non-Event sounds “swapped” when I used setSpeakerPosition(), but not 3D Event sounds.
I then changed how I set the 3D Event position by grabbing its ChannelGroup and using group->set3DAttributes(). This worked - a non-event sound AND an event sound played “from my right” came out of the left speaker when swapped, and out of the right speaker when not swapped.
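For reference, the workaround was along these lines (a sketch; eventInstance is a started FMOD::Studio::EventInstance*):

```cpp
FMOD::ChannelGroup *group = nullptr;
// The event's ChannelGroup only exists once the instance has started playing.
if (eventInstance->getChannelGroup(&group) == FMOD_OK && group != nullptr)
{
    FMOD_VECTOR pos = { 2.0f, 0.0f, 0.0f };  // to the listener's right
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };
    group->set3DAttributes(&pos, &vel);      // Core-side panning honours setSpeakerPosition()
}
```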
I assume that event->set3DAttributes() is the “correct” way of moving an Event around (especially since it will, I assume, use the front & up vectors to also aim the sound cone if used). However, it appears that event->set3DAttributes() doesn’t pick up on speaker locations that are changed with setSpeakerPosition. The event’s ChannelGroup, however, does appear to take the setSpeakerPosition changes into account.
Am I misunderstanding these two calls?
Thanks!
setSpeakerPosition only affects the Core API and not Studio, so your understanding is correct, but our documentation could convey this better.
I have added a task to improve the documentation of setSpeakerPosition to clarify this.
Thanks for the update, Cameron - much appreciated.
Just to follow up and make sure things are understood (hopefully this will be helpful for others to know):
- If 3D pos/vel is set via Studio calls, the 3D positioning is done using default speaker layouts (i.e., a sound in front of us and to the right will come out of the “front right” speaker output regardless of where that speaker is positioned via setSpeakerPosition() calls).
- However, if the 3D pos/vel is set via Core API calls, the 3D positioning will take the setSpeakerPosition() locations/vectors into account when mixing.
Do I have that right?
And if so, are these two methods more or less functionally equivalent? That is, can one safely set the 3D position of an event via either the Studio calls or the Core ChannelGroup calls and expect either to do the job? (FWIW, this has been my experience; both appear to work fine, it’s just that using the ChannelGroup to position the event takes setSpeakerPosition() into account.)
Thanks again!
Generally, digging into or overwriting Event data with the Core API is advised against, as it can cause unexpected results and/or problems. You would have to remove any 3D panner from the Event and call setMode on the ChannelGroup, but it’s not guaranteed the FMOD Studio API won’t just stomp over the data.
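Very roughly, something like this (untested sketch; assumes the event’s spatializer has been removed in the FMOD Studio tool and eventInstance is a playing FMOD::Studio::EventInstance*):

```cpp
FMOD::ChannelGroup *group = nullptr;
if (eventInstance->getChannelGroup(&group) == FMOD_OK && group != nullptr)
{
    // With the Studio spatializer removed, enable Core 3D processing on the
    // event's ChannelGroup and position it through the Core API instead.
    group->setMode(FMOD_3D);
    FMOD_VECTOR pos = { 0.0f, 0.0f, 1.0f };  // one unit in front of the listener
    FMOD_VECTOR vel = { 0.0f, 0.0f, 0.0f };
    group->set3DAttributes(&pos, &vel);
}
```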
Is there something in particular you are wanting to use this for? We might be able to help find an alternative.
Cameron,
What I’ve done is to abstract the 3D positioning calls for Sampled Sounds and Events such that the main app doesn’t care what they are under the covers, it can just position them.
In addition, our application uses a non-standard speaker layout; for example, the side surround speakers are located at specific angles to the listener (right is right and forward; left is left and aft). The setSpeakerPosition calls work very nicely to locate these speakers at their actual positions, so that 3D positioning works smoothly around the listener when positioning 3D sampled sounds.
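As a rough sketch of that setup (the angles here are illustrative, not our actual measurements, and placeSpeaker is just a helper we might write):

```cpp
#include <cmath>

// Place a speaker at a given angle from straight ahead (degrees, positive = clockwise).
// setSpeakerPosition takes 2D coordinates: x (-1 = left, +1 = right), y (+1 = front, -1 = rear).
static void placeSpeaker(FMOD::System *core, FMOD_SPEAKER speaker, float angleDeg)
{
    const float rad = angleDeg * 3.14159265f / 180.0f;
    core->setSpeakerPosition(speaker, std::sin(rad), std::cos(rad), true);
}

void setupSpeakerLayout(FMOD::System *core)
{
    placeSpeaker(core, FMOD_SPEAKER_SURROUND_RIGHT,   60.0f);  // right and forward
    placeSpeaker(core, FMOD_SPEAKER_SURROUND_LEFT,  -120.0f);  // left and aft
}
```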
It was our intention to simply use the same set3DPosition call to place event sounds in the same audio space (i.e., adhering to the unique setSpeakerPosition layout). Since the overload is virtual, we can position a sound in 3D space and let the overload determine whether it calls the sound/channel 3D position method or the Event/ChannelGroup 3D position method.
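In skeleton form the abstraction looks something like this (the type names are invented for illustration; only the FMOD calls are real):

```cpp
struct Positionable
{
    virtual ~Positionable() {}
    virtual void set3DPosition(const FMOD_VECTOR &pos, const FMOD_VECTOR &vel) = 0;
};

// Core API path: a sampled sound playing on a Channel.
struct SampledSound : Positionable
{
    FMOD::Channel *channel = nullptr;
    void set3DPosition(const FMOD_VECTOR &pos, const FMOD_VECTOR &vel) override
    {
        channel->set3DAttributes(&pos, &vel);
    }
};

// Studio API path: an EventInstance positioned via FMOD_3D_ATTRIBUTES.
struct EventSound : Positionable
{
    FMOD::Studio::EventInstance *event = nullptr;
    void set3DPosition(const FMOD_VECTOR &pos, const FMOD_VECTOR &vel) override
    {
        FMOD_3D_ATTRIBUTES attr = {};
        attr.position = pos;
        attr.velocity = vel;
        attr.forward  = { 0.0f, 0.0f, 1.0f };  // default facing
        attr.up       = { 0.0f, 1.0f, 0.0f };
        event->set3DAttributes(&attr);
    }
};
```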
That’s when we found that the Event 3D spatializer appeared to be unaware of the audio layout set by setSpeakerPosition.
Hopefully, there’s a better way of going about this, as your comment about not poking around in the Core API for Studio-based sounds makes sense.
Regards,
Bill
At this point it doesn’t look like there is another way to do what you want, unfortunately.
I have created a task for a feature request to allow setSpeakerPosition to affect the Studio Spatializer, but I don’t have an ETA for it at this time.
This would be VERY handy. Then we could simply do panning for headphones in code rather than muck around with spatialisers in every event… which does seem a tad backwards to me.
Headphone mode in Wwise is very simple to do.
What about this problem? Have you found a solution? I encountered the same problem using the Studio API (2.00.09). Due to space limitations, I couldn’t place the speakers according to the standard 7.1 layout, and setSpeakerPosition didn’t work.
We don’t currently have an ETA for this but I will see if I can bump the priority for you.