If I have poly wave files split into monos, how can I recombine them virtually to play in sync as if they were the original poly file?
Thanks.
It depends on what you mean by “as if they were the original poly file.”
If you ensure that the instruments containing the audio files are triggered at the same time - for example, by putting them all on separate audio tracks at position 0:00:000 on the timeline so that they all start when the event does - and the assets are not set to stream, the audio files will start playing in perfect sync.
Ensuring that each audio file goes to a specific channel is slightly more complex. First, make sure each audio track’s output is upmixed and distributed evenly to all channels: right-click on the track’s input meters in the deck and select your target format from the context menu. Then add an FMOD Channel Mix effect, set to the same grouping, to each audio track, and use it to silence every channel except the one that particular audio file is meant to play through.
The signal from each audio track in the event is mixed together into the master track, so the event’s output will be all of the audio files playing in sync through the appropriate speakers.
Thanks. To clarify, I’m talking about the API only. Using FMOD in an audio app.
You can use Channel::setDelay to schedule all your mono sounds to start at the same time in the future.
Read the current system DSP clock, then add a bit of headroom, i.e. a few mix blocks’ worth: if the mix block size is 1024, add 2048 or 3072, for example. How much you need depends on how slow the code that is playing the sounds is.
Alternatively, you can stall the whole mixer with a mutex lock by wrapping your playsound logic in System::lockDSP and System::unlockDSP.
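A sketch of the lock-based approach in the Core C API (the sound handles and channel group are assumed to exist already; error checking omitted): the mixer is held while all the playSound calls are issued, so no voice can start a mix block ahead of the others.

```c
#include <fmod.h>

/* Start several sounds on the same mix block by stalling the mixer. */
void play_locked(FMOD_SYSTEM *system, FMOD_SOUND **sounds, int count,
                 FMOD_CHANNELGROUP *group)
{
    FMOD_System_LockDSP(system);   /* the mixer cannot advance past here */
    for (int i = 0; i < count; i++) {
        FMOD_CHANNEL *channel = NULL;
        FMOD_System_PlaySound(system, sounds[i], group,
                              0 /* not paused */, &channel);
    }
    FMOD_System_UnlockDSP(system); /* all voices begin on the same block */
}
```

Keep the locked region short; the audio output will stall if the lock is held longer than the mixer's buffering allows.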
Revisiting this after a year.
I just tried loading multiple copies of the same audio file into different sounds, all with the same settings (streaming), adding them all to a channelGroup, and then running this code on the play command:
FMOD_System_LockDSP(system);
int k;
FMOD_ChannelGroup_GetNumChannels(channelGroup, &k);
for (int c = 0; c < k; c++) {
    FMOD_CHANNEL *ch;
    FMOD_ChannelGroup_GetChannel(channelGroup, c, &ch);
    FMOD_Channel_SetPaused(ch, false);
}
FMOD_System_UnlockDSP(system);
I still hear phasing between the files, suggesting they are not in sync.
What am I doing wrong?
Many thanks.
The phasing effect seems to vary with the channel count. A low-channel-count wav file sounds very close; higher channel counts sound more and more phasey.
Ah, I think I solved this… I was doing a setPosition on the channels without enclosing it in a lockDSP as well… does it make sense that position setting would also need to be locked?
If you play them and they’re already out of sync, then pausing them inside a lock isn’t going to fix it.
You should play the sounds inside a lock if you want them to trigger together in time.
You could also setPosition all the voices inside a lock, which I guess would synchronize them afterwards, but ideally you want to start them synchronized.
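A sketch of that seek-then-resume pattern under the same assumptions (Core C API; position given in milliseconds; error checking omitted): every channel in the group is repositioned and unpaused while the mixer is held, so no voice renders audio from the old position after another has already moved.

```c
#include <fmod.h>

/* Seek every channel in a group to the same position, then resume,
   all within one mixer lock so the voices stay sample-aligned. */
void seek_group_in_sync(FMOD_SYSTEM *system, FMOD_CHANNELGROUP *group,
                        unsigned int position_ms)
{
    int count = 0;
    FMOD_ChannelGroup_GetNumChannels(group, &count);

    FMOD_System_LockDSP(system);
    for (int i = 0; i < count; i++) {
        FMOD_CHANNEL *channel = NULL;
        FMOD_ChannelGroup_GetChannel(group, i, &channel);
        FMOD_Channel_SetPosition(channel, position_ms, FMOD_TIMEUNIT_MS);
        FMOD_Channel_SetPaused(channel, 0);
    }
    FMOD_System_UnlockDSP(system);
}
```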
I’m not pausing inside the lock, I’m unpausing.
I’ve got it working by locking on unpauses and setPositions.
Thanks.
You are right. Did you have paused = true inside the playSound?
Yeah, I load the sounds and call playSound(paused) straight away, then control playback through a channel group with pausing and setPosition. It’s a waveplayer app.