Hello. I’ve recently started using FMOD with Unity and would be happy with it if not for the latency. Basically, any event starts 100–200 ms later than expected. I confirmed this in an empty project with just one GameObject holding an FMOD Event Emitter. This became crucial as I started working on feedback-dependent actions such as shooting and moving.
Here are some of the settings I have; I don’t know what might be useful, but I’m ready to provide anything. I’m using Unity 2020, FMOD Studio 2.00.08, Windows 10.
I tried switching between WASAPI and ASIO, but there was no difference. I doubt it is a hardware issue, since I get very low latency in various DAWs. Searching through dozens of forum posts and the API docs didn’t help me, unfortunately.
I’ve done some additional testing. From a single trigger I played both an FMOD event and a Unity audio event with the same sound, and the latency was nearly identical: just a couple of ms difference, resulting in a chorus effect. But when I added extra sounds to the FMOD event, it became obvious that it had much more latency. The more audio clips in the FMOD event (and the bigger they are), the more latency, and it stacks really fast. A 2-second, 500 KB file added maybe 30 ms, and the same loop but longer (5 MB) added 100+ ms.
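For reference, a minimal sketch of the A/B test described above: firing the same sound through FMOD and through Unity’s built-in audio on the same frame. The key binding, event path, and AudioSource setup are placeholders, not from the original post.

```csharp
using UnityEngine;
using FMODUnity;

// Trigger the same sound through FMOD and Unity's built-in audio
// on the same frame, to compare their onset latency by ear (the
// "chorus effect" described above) or in a recording.
public class LatencyABTest : MonoBehaviour
{
    [EventRef] public string fmodEvent;  // placeholder, e.g. "event:/Test"
    public AudioSource unitySource;      // AudioSource with the same clip assigned

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            RuntimeManager.PlayOneShot(fmodEvent);      // FMOD path
            unitySource.PlayOneShot(unitySource.clip);  // built-in Unity path
        }
    }
}
```

Recording the game’s output and measuring the gap between the two onsets gives a concrete number instead of a subjective impression.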
For me it doesn’t make much sense… Is it a bug or just something unavoidable?
How are you loading your banks and event sample data prior to playing event instances?
joseph, if I understand the API correctly, they load when I create an instance in script or via the FMOD Event Emitter component (last screenshot). To be sure, I tried calling Studio::EventDescription::loadSampleData at the start of the scene, but it didn’t affect latency.
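For anyone else trying the same thing, this is roughly what preloading sample data looks like in the Unity integration. The event path is a placeholder; the calls (`RuntimeManager.GetEventDescription`, `loadSampleData`, `unloadSampleData`) are the standard FMOD for Unity / Studio API ones.

```csharp
using UnityEngine;
using FMODUnity;

// Preload an event's sample data at scene start so the first
// playback doesn't pay the load/decompression cost.
public class PreloadSampleData : MonoBehaviour
{
    [EventRef] public string eventPath;  // placeholder, e.g. "event:/Test"
    private FMOD.Studio.EventDescription description;

    void Start()
    {
        description = RuntimeManager.GetEventDescription(eventPath);
        // Loading is asynchronous; poll getSampleLoadingState()
        // if you need to know when it has finished.
        description.loadSampleData();
    }

    void OnDestroy()
    {
        description.unloadSampleData();
    }
}
```

Note that `loadSampleData` returns before loading completes, so triggering the event immediately after calling it can still hit unloaded data.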
It might be something on the technical side, like loading or choice of drivers.
But you are also using an alpha version of Unity, right? All 2020 versions are alphas, I think, and not meant for production pipelines.
For any real development, I would suggest you stick with the LTS releases, just to be sure. That has saved me a lot of trouble over the years; each time I tried new things with alphas or betas, one or two strange behaviors would pop up.
I think the latest LTS is Unity 2018.4.22f1; find which FMOD version is best paired with it and set up your development there.
In case you aren’t aware, here’s what LTS releases are: https://blogs.unity3d.com/2018/04/09/new-plans-for-unity-releases-introducing-the-tech-and-long-term-support-lts-streams/
Maybe it’s not an answer that fits your workflow or scope, but it’s my 2 cents from a long career on game audio.
Panagiotis_Kouvelis, thanks, I guess you are right about using LTS for serious projects. It just seems weird that in my situation latency depends on audio file size, despite the fact that I load the sample data in advance.
It is weird, I agree. Actually the weirdness made me think of the LTS pipeline.
I have a question. Are you testing the latency with both the Unity and FMOD editors open, and with a live connection between them for logging?
If yes, have you tried building the app and running it to test latency with visual playback event markers and a good screen recorder?
If you do that, be careful of video files, they throw audiovisual sync off sometimes.
Or you can try two audio files that you trigger on the same frame: one with an impulse followed by 2 seconds of silence, and one with an impulse, 2 seconds of silence, and then some extra content to make it larger than the first. Play one on the left channel and the other on the right, then record the output of the executable; the impulses should land at the same time. You can also take a noise loop and a copy of it in a separate file, do the same left/right trigger-and-record, then put the recording in a sequencer and invert the phase of one channel. When you sum to mono, you should hear nothing.
@kiberptah you could also try lowering the DSP Buffer Size in Unity’s audio settings. I believe it affects the latency of all the game’s audio, and thus FMOD as well.
Tell me if it helps!
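One thing worth checking alongside this: if I’m not mistaken, the FMOD integration runs its own mixer with its own buffer settings (configurable in the FMOD Settings asset), separate from Unity’s. A quick sketch to log the buffer configuration FMOD is actually running with, using the Core System exposed by the Unity integration (`getDSPBufferSize` and `getSoftwareFormat` are standard FMOD Core API calls):

```csharp
using UnityEngine;
using FMODUnity;

// Log FMOD's actual DSP buffer configuration and sample rate at
// runtime, to estimate the mixer's contribution to total latency.
public class FmodLatencyReport : MonoBehaviour
{
    void Start()
    {
        FMOD.System core = RuntimeManager.CoreSystem;
        core.getDSPBufferSize(out uint bufferLength, out int numBuffers);
        core.getSoftwareFormat(out int sampleRate, out _, out _);

        // Rough mixer latency: bufferLength * numBuffers samples.
        float ms = 1000f * bufferLength * numBuffers / sampleRate;
        Debug.Log($"FMOD DSP buffer: {bufferLength} x {numBuffers} @ {sampleRate} Hz, roughly {ms:F1} ms");
    }
}
```

If the reported figure is far below the 100–200 ms being observed, the extra delay is coming from somewhere other than the mixer buffer.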
Good idea, but keep in mind that as you lower the buffer size (and with it latency), you increase processing load, leaving less headroom for audio processing. Since those are project settings, I’m assuming they will also apply on your players’ machines? So you have to balance it and exercise prudence.