I’m currently working on a DJ application in Unity, which uses the FMOD low level api, and I need to be able to output the same audio to 2 different devices / drivers. One output will be the master output and the other the PFL / cue output for the headphones.
I know I’ll need to create 2 separate low level systems to be able to achieve this, and that sounds loaded in 1 system can’t be shared with the other system, so I was wondering what the best way is to have both systems output the same audio.
What I wanted to try first is to create 2 custom DSPs: one to read the output from one system, and a second to copy that output into the second system. This will of course give the second system a slight delay, so I was wondering if there’s a better approach.
For example: Is it possible to create a DSPConnection between 2 DSP’s in separate low level systems?
Quick answer, no you can’t connect 2 systems with a DSPConnection.
If you output to 2 different devices, there may be a synchronization issue. If their clocks are slightly different (i.e. they use different timing crystals), the two playback positions will drift apart, e.g. 48000.000 Hz vs 48000.001 Hz.
This means the idea of copying between 2 DSPs would be better, but you’d have to try to lock-step the two systems together.
You could try something like FMOD_INIT_MIX_FROM_UPDATE; from the look of the documentation you can’t use it with WASAPI, so you would have to switch to FMOD_OUTPUTTYPE_DSOUND.
I would think this would just allow you to call systemA->update and systemB->update together without worrying about timing issues (so there shouldn’t be a delay if you copy from a systemA DSP to a buffer, and then from that buffer to a systemB DSP).
I’m currently reading the data using a custom DSP, and then writing it to the other low level system by streaming it to a sound using the PCM read callback. This adds quite a bit of delay, depending on the DSP buffer size in the first system and the encode buffer size of the sounds on the second system, but at least it works.
Using some form of lock-step to sync them might work, but it would add extra latency to the first system.
I first tried to use another custom DSP for reading the data, but it wouldn’t run without an input connected to it. Is there a way to have a DSP output sound without any inputs connected to it?
The lowest-latency way is always to use DSPs. There is no way to have an isolated DSP run by itself, but you can simply mute a DSP’s output by returning FMOD_ERR_DSP_SILENCE from its process/read callback, which makes that branch of the graph go idle.