FMOD Feature Request - built-in Meta support

It would be great if FMOD incorporated support for Meta alongside Resonance. Meta Quest headsets are fairly dominant in the commercial VR space, and the apparently straightforward setup for FMOD and Meta outlined on the Meta Developer site is anything but. Even when followed to the letter, problems abound, and there is no real support from Meta. As a result, our technical lead has been urging us to abandon FMOD and simply use Unity’s built-in audio, which appears far less problematic. As the person responsible for audio design, I don’t want to go down that route, as we’d lose critical functionality that would affect audio design and interaction.

If FMOD’s lack of built-in Meta Quest support is driving developers to abandon it, surely that is something you would wish to address? FMOD is our preferred audio middleware tool – I think the software is great – but the problems we’ve encountered developing for Meta Quest 3 in Unity with FMOD Studio have led to questions about whether we continue with FMOD, particularly as the team do not want to use the Resonance plug-ins. Is there any chance of Meta features being added in the near future?

Sorry to hear you’re having a bad time. Is this the guide you’re trying to follow?

https://developers.meta.com/horizon/documentation/unity/meta-xr-audio-sdk-fmod-req-setup/

At what step are you running into problems? If we can fix it, or iron out any confusing or incomplete information here, then I’m sure we could convince Meta to update the doc on their site.

Hello Andrew,

Apologies for the delay in replying – I have been away on leave. Our Unity tech lead is still on leave, but I have emailed him asking for confirmation of the Unity/FMOD versions and plug-ins used. As soon as he gets back to me, I will pass on the information and the build log – we would be very appreciative of any help and advice. We have always found FMOD support services to be excellent – in marked contrast to Meta!

The page you mention was one of the ones we looked at. We went through all the pages relating to FMOD and Meta and followed the guidance to the letter, although it was not always clear, and the embedded, disjointed links didn’t always clarify matters because they pointed to generic content rather than Meta Quest specific guidance. The support resources suggest that with a few plug-ins in FMOD and a few setup tweaks in Unity, audio can be spatialised for the Meta headsets, but in reality implementing FMOD, particularly with the Meta audio library, has been highly problematic. Our tech lead found implementing audio without FMOD much more straightforward, and found that the room geometry etc. worked well – hence the pressure to drop FMOD, which isn’t something I want to do as the audio lead.

When using FMOD, if we don’t use the Meta audio library’s room acoustic modelling, the headset and desktop builds generally work (although there have been challenges getting audio playback in some cases), but the sound from the emitters isn’t contained within the geometry of the spaces. If we try to use the Meta audio library within Unity, the desktop version models the sounds correctly, but there’s no audio from the mobile build.

Currently the example linked from the support pages is the Go Karting example, which is a standard FMOD/Unity implementation. What would be helpful is a link to an example showing the step-by-step stages of a dedicated Meta Quest 3 Android Unity build, with screenshots at each stage and troubleshooting steps for common issues such as no audio output. It wouldn’t have to be complicated (not a full game) – just three different sound event examples: one mono positional sound emitter, one ambisonic environment (3D), and one non-ambisonic environment (3D). It would be good to include use of the Meta audio library for acoustic modelling and ray tracing as part of the example.

The reason I mention guidance for non-ambisonic environments is that the Meta plug-ins don’t really account for them – environments wouldn’t be modelled using the positional sound source plug-in, and people might not want to use ambisonics for environments either, so the Meta ambisonic plug-in wouldn’t be used. That really leaves one option: using the FMOD spatialiser for non-ambisonic environments. Some guidance on what to do for environmental events that don’t use the Meta plug-ins would help ensure correct spatialisation on the Meta Quest 3.

We’ve also had intermittent issues when validating the FMOD projects, which sometimes flag the Meta plug-ins as unexpected – I’m not sure if this is a version issue, but it can happen when there’s nothing wrong with the FMOD setup. I do think it would be helpful if the Meta plug-ins could be supported by FMOD in the same way as the Resonance plug-ins (built into the software), particularly given the place of Meta headsets in the marketplace.

Many thanks for your help,

Helen

The situations with Resonance and Meta are very different. In the case of Resonance, the developers (Google) open-sourced the product and essentially abandoned it; seeing value in the product (and having the license to do so), we adopted it and keep it running. In the case of Meta, they are actively maintaining a closed-source product, making it very difficult for us to take any meaningful ownership.

As Andrew said though, while it’s not our product, we can provide some degree of support on the technical side of getting it working. When your tech lead gets back, please have them detail the problems encountered so we can try to help. Meta also has their own support forum for anything beyond our ability to assist with.

For non-ambisonic sources, I believe Meta intends you to use their source plugin, which will take the 3D information from the Event and process it appropriately.

I hope we can help you sort this out – looking forward to hearing back from your tech lead.

Thank you for the information, Mathew – I understand the situation with Meta now. I will send through the implementation information once I have it, and we appreciate your support with this.

The sound source plugin wouldn’t work well for environments because its input has to be mono, which would surely undermine any spatialisation set up in the FMOD event or pre-baked into the asset. Ideally, Meta would offer a stereo or multi-channel input spatialiser (rather like the FMOD spatialiser) better suited to 3D environments, but with a binaural output and no requirement for ambisonic tracks.

Regards,

Helen.

Thanks for the clarification on why the sound source plugin isn’t appropriate – I missed that you were discussing ambiences, i.e. multichannel signals. It sounds like what you want is a multichannel → binaural plugin provided by Meta; however, they don’t offer such a thing.

My guess is they want you to provide ambiences as ambisonics, since multichannel doesn’t accurately convey sound above and below the listener. Processing your ambiences to ambisonics offline might be an option; that would still allow you to rotate the ambience to account for head movement. If rotation isn’t required, then processing the ambiences offline to binaural and playing them without spatialization is the other option.

The only other option I can think of would be to split the ambience into several mono files and position them in the world, moving with the listener – see the sketch below.
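If it helps, here’s a rough Unity sketch of that idea – a hedged illustration only, assuming FMOD for Unity 2.02 or later (where StudioEventEmitter uses EventReference). The stem events, offsets, and names are placeholders, not a definitive implementation:

```csharp
using UnityEngine;
using FMODUnity;

// A minimal sketch of the split-ambience approach: each mono stem plays
// from its own emitter at a fixed offset around the listener. The rig
// follows the listener's position but not its rotation, so the stems stay
// world-aligned and head turns are handled by the spatialiser.
public class SplitAmbienceRig : MonoBehaviour
{
    public Transform listener;          // transform carrying the FMOD listener
    public EventReference[] stemEvents; // one mono event per stem (authored in FMOD Studio)
    public float radius = 5f;           // distance of each stem from the listener

    void Start()
    {
        // Compass-point offsets; extend or reposition to suit the ambience.
        Vector3[] offsets =
        {
            Vector3.forward * radius, Vector3.back * radius,
            Vector3.right * radius,   Vector3.left * radius,
        };

        for (int i = 0; i < stemEvents.Length && i < offsets.Length; i++)
        {
            var stem = new GameObject($"AmbienceStem{i}");
            stem.transform.SetParent(transform, false);
            stem.transform.localPosition = offsets[i];

            var emitter = stem.AddComponent<StudioEventEmitter>();
            emitter.EventReference = stemEvents[i];
            emitter.Play(); // start the stem as soon as it is placed
        }
    }

    void LateUpdate()
    {
        // Copy position only: never the listener's rotation.
        transform.position = listener.position;
    }
}
```

The important detail is copying only the listener’s position in LateUpdate – if the rig inherited the rotation too, the stems would turn with the head and the spatialisation would collapse back to a static image.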

Thanks Mathew,

Yes, I would like a plugin like that from Meta – even a two-channel input for ambiences without ambisonics would be good. Currently the best method we’ve found is to process the audio binaurally (pre-baked) using Dear VR Pro, or to use Reaper and the IEM plugins (my preferred method, also pre-baked) to create four-channel ambisonic files, putting the Meta ambisonics plugin on the ambisonic track in the event and using other tracks with the FMOD scatterer for randomised spot sounds. On the master, we’ve been using the FMOD spatialiser and the distance parameters in the event macros. We then position emitters in the scene using the Meta audio source – so we’ve adopted a combination of the approaches you recommend. That seems to work best on the headset – that is, until we start to use the Meta audio library in Unity, at which point the audio stops working (Android build).

One thing we’ve found so far with the Meta audio library is that when using the FMOD spatialiser for environments, the distance override doesn’t seem to work – the only distance parameter Unity seems to pick up when the Meta library is in use is the one in the event macros, so we set the distance override to off.

Regards,

Helen

For your Android issue, the logs would be helpful. It sounds like you might have missed adding the Meta dynamic plugin for the Android platform, or perhaps you haven’t told Unity to include the plugin file with the build (for the appropriate architecture). We have some information about that in the plugins section of our Unity documentation.
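If it helps, here’s a sketch of how you could verify the Unity side of the second possibility from an editor script. The asset path and file name below are assumptions (check what the Meta XR Audio SDK actually installs in your project); the PluginImporter calls are standard Unity editor API:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

// A sketch of the import settings described above: make sure the Meta
// plugin binary is marked for inclusion in Android builds, targeting the
// Quest 3's ARM64 architecture. The asset path below is hypothetical.
public static class MetaPluginImportCheck
{
    [MenuItem("Tools/Verify Meta FMOD Plugin Import")]
    public static void Verify()
    {
        const string pluginPath = "Assets/Plugins/Android/libMetaXRAudioFMOD.so"; // hypothetical path/name

        var importer = AssetImporter.GetAtPath(pluginPath) as PluginImporter;
        if (importer == null)
        {
            Debug.LogError($"No plugin found at {pluginPath} - is the binary in the project?");
            return;
        }

        importer.SetCompatibleWithPlatform(BuildTarget.Android, true);
        importer.SetPlatformData(BuildTarget.Android, "CPU", "ARM64");
        importer.SaveAndReimport();
        Debug.Log("Android import settings applied for the Meta plugin.");
    }
}
#endif
```

The plugin also needs to be listed by name in the FMOD for Unity settings so the dynamic library is loaded at runtime – the plugins section of our Unity documentation covers that step.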

Regarding min/max distance, I’m not sure I understand your question, but I can explain how it all works. The Event has min/max distance values specified in the Event macros. A spatializer (FMOD or otherwise) will read those values to perform its distance attenuation. You can override the min/max specified in the Event macros at runtime with the API (or the Unity attenuation override). If you enable the spatializer’s own attenuation override, you are telling it to ignore the Event macro values, ignore any API/Unity values, and use the values specified in the spatializer override instead. Those values can be automated like any other if you want control at runtime.
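As a concrete illustration of the API route, here’s a minimal sketch – the event path is hypothetical, and these calls only take effect while the spatializer’s own attenuation override is off:

```csharp
using UnityEngine;
using FMOD.Studio;
using FMODUnity;

// A minimal sketch of overriding the Event macro min/max distance at
// runtime via the Studio API. The event path is hypothetical.
public class DistanceOverrideExample : MonoBehaviour
{
    EventInstance instance;

    void Start()
    {
        instance = RuntimeManager.CreateInstance("event:/Ambience/Forest"); // hypothetical event
        instance.set3DAttributes(RuntimeUtils.To3DAttributes(transform));

        // These values replace the Event macro min/max distance, provided
        // the spatializer's own attenuation override is left OFF.
        instance.setProperty(EVENT_PROPERTY.MINIMUM_DISTANCE, 2f);
        instance.setProperty(EVENT_PROPERTY.MAXIMUM_DISTANCE, 40f);

        instance.start();
    }

    void OnDestroy()
    {
        instance.stop(STOP_MODE.ALLOWFADEOUT);
        instance.release();
    }
}
```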

Hello Mathew,

We did include the dynamic plugin for Android, but I will try to get the logs for you.

Regards,

Helen