Hi everyone,
I’m working on a Unity project in which several audio streams play from different sources, such as music/audio files and live microphone input. We want to be able to adjust these sounds outside of Unity in real time (for example, make one louder without affecting the others) while keeping latency low and the sounds in sync, with priority on latency.
My idea is to take the required sounds, join them into a single multi-channel sound (one source per channel), and then play that on the external device. Any suggestions on how to achieve this, or other recommendations?
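To make that concrete, this is the kind of playback-side setup I picture, sketched against the Core API. The raw speaker mode is my guess from the docs (the idea being that a 4-channel sound would then map channel-for-channel onto the device outputs); the driver index, channel count, and file name are placeholders:

```cpp
#include "fmod.hpp"
#include <chrono>
#include <thread>

int main()
{
    FMOD::System *system = nullptr;
    FMOD::System_Create(&system);

    // FMOD_SPEAKERMODE_RAW with 4 channels: no speaker-based up/downmixing,
    // so a 4-channel sound should come out 1:1 on the device's outputs.
    system->setSoftwareFormat(48000, FMOD_SPEAKERMODE_RAW, 4);
    system->init(64, FMOD_INIT_NORMAL, nullptr);

    // 1 is a placeholder index for the external interface; the real one
    // comes from System::getNumDrivers() / System::getDriverInfo().
    system->setDriver(1);

    // "combined.wav" stands in for the joined 4-channel sound.
    FMOD::Sound *combined = nullptr;
    system->createSound("combined.wav", FMOD_CREATESAMPLE, nullptr, &combined);

    FMOD::Channel *channel = nullptr;
    system->playSound(combined, nullptr, false, &channel);

    bool playing = true;
    while (playing)                        // pump FMOD until playback finishes
    {
        system->update();
        if (channel->isPlaying(&playing) != FMOD_OK)
            playing = false;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    combined->release();
    system->release();
}
```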
I’m not very familiar with FMOD. So far, I have been able to read a multi-channel wav file and play it on an output device using the Core API. If I then play several sounds at once, they get mixed together across the channels, which seems like reasonable default behavior, but I would like to keep them separated.
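To show what I mean, here is a stripped-down version of that experiment, plus the one call I found in the docs that might keep the sounds apart again: ChannelControl::setMixMatrix. Whether that is actually the intended tool here is exactly what I’m unsure about (the device index, file names, and 4-channel raw output are assumptions on my part):

```cpp
#include "fmod.hpp"
#include <chrono>
#include <thread>

int main()
{
    FMOD::System *system = nullptr;
    FMOD::System_Create(&system);
    system->setSoftwareFormat(48000, FMOD_SPEAKERMODE_RAW, 4); // as in the sketch above
    system->init(64, FMOD_INIT_NORMAL, nullptr);
    system->setDriver(1);                                      // placeholder device index

    // Two mono sounds; played as-is, they end up summed onto the same outputs.
    FMOD::Sound *a = nullptr, *b = nullptr;
    system->createSound("a.wav", FMOD_CREATESAMPLE, nullptr, &a);
    system->createSound("b.wav", FMOD_CREATESAMPLE, nullptr, &b);

    FMOD::Channel *chA = nullptr, *chB = nullptr;
    system->playSound(a, nullptr, false, &chA);
    system->playSound(b, nullptr, false, &chB);

    // setMixMatrix takes a row-major outchannels x inchannels gain matrix;
    // here it pins sound A to output 0 and sound B to output 1.
    float toOut0[4] = { 1.0f, 0.0f, 0.0f, 0.0f };
    float toOut1[4] = { 0.0f, 1.0f, 0.0f, 0.0f };
    chA->setMixMatrix(toOut0, 4, 1);
    chB->setMixMatrix(toOut1, 4, 1);

    for (int i = 0; i < 500; ++i)          // pump FMOD for ~5 seconds
    {
        system->update();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    a->release();
    b->release();
    system->release();
}
```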
Looking through the forums, I found this:
https://qa.fmod.com/t/how-can-i-make-an-wav-file-with-multichannel-ouputs/9238
> To create a multichannel file you need to get the raw PCM data from the FMOD::Sound objects and interleave that into a single buffer then write that to file.
This seems close to what I want to do, but the thread then goes in a different direction, and I’m still a bit lost on how to implement it, or whether it’s really what I should be doing.
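In case it helps show where I’m stuck, here is my attempt at the interleaving step from that answer, except that instead of writing the buffer to a file I wrap it as a playable sound with FMOD_OPENMEMORY | FMOD_OPENRAW, which is my guess at the right flags. The file names, the 48 kHz rate, and the assumption that both inputs are mono, 16-bit, and the same length are all mine:

```cpp
#include "fmod.hpp"
#include <algorithm>
#include <chrono>
#include <cstring>
#include <thread>
#include <vector>

// Hypothetical helper: fully decode one sound (assumed mono, 16-bit PCM)
// and copy its samples out via Sound::lock()/unlock().
static std::vector<short> readPcm(FMOD::System *system, const char *path)
{
    FMOD::Sound *sound = nullptr;
    system->createSound(path, FMOD_CREATESAMPLE, nullptr, &sound); // decode fully

    unsigned int bytes = 0;
    sound->getLength(&bytes, FMOD_TIMEUNIT_PCMBYTES);

    void *ptr1 = nullptr, *ptr2 = nullptr;
    unsigned int len1 = 0, len2 = 0;
    sound->lock(0, bytes, &ptr1, &ptr2, &len1, &len2);

    std::vector<short> pcm(bytes / sizeof(short));
    std::memcpy(pcm.data(), ptr1, len1);      // len2 is 0 when locking from 0
    sound->unlock(ptr1, ptr2, len1, len2);
    sound->release();
    return pcm;
}

int main()
{
    FMOD::System *system = nullptr;
    FMOD::System_Create(&system);
    system->init(32, FMOD_INIT_NORMAL, nullptr);

    // "a.wav" / "b.wav" are stand-ins for the sounds to combine.
    std::vector<short> a = readPcm(system, "a.wav");
    std::vector<short> b = readPcm(system, "b.wav");

    // Interleave frame by frame: channel 0 = sound A, channel 1 = sound B.
    size_t frames = std::min(a.size(), b.size());
    std::vector<short> interleaved(2 * frames);
    for (size_t i = 0; i < frames; ++i)
    {
        interleaved[2 * i]     = a[i];
        interleaved[2 * i + 1] = b[i];
    }

    // Wrap the raw interleaved buffer as a 2-channel sound and play it.
    FMOD_CREATESOUNDEXINFO exinfo = {};
    exinfo.cbsize           = sizeof(exinfo);
    exinfo.numchannels      = 2;
    exinfo.defaultfrequency = 48000;          // assumed source sample rate
    exinfo.format           = FMOD_SOUND_FORMAT_PCM16;
    exinfo.length           = (unsigned int)(interleaved.size() * sizeof(short));

    FMOD::Sound *multi = nullptr;
    system->createSound((const char *)interleaved.data(),
                        FMOD_OPENMEMORY | FMOD_OPENRAW, &exinfo, &multi);

    FMOD::Channel *channel = nullptr;
    system->playSound(multi, nullptr, false, &channel);

    bool playing = true;
    while (playing)                           // pump FMOD until playback ends
    {
        system->update();
        if (channel->isPlaying(&playing) != FMOD_OK)
            playing = false;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    multi->release();
    system->release();
}
```

Even if that works for the file-based sources, I don’t see how it extends to the live microphone input, since the whole buffer has to be built up front; that’s a big part of why I’m unsure this is the right path.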
Any help is appreciated.