realtime stitching

In the previous version, FMOD Ex, there was a realtime stitching example.
It allowed you to dynamically create gapless sentences from subsounds, in code.

I can't make this work with the latest FMOD Studio low-level API.
The setSubSoundSentence API is gone, and when I call playSound on a sound set up with subsounds, I get FMOD_ERR_SUBSOUNDS.
Any idea how to do that now?


FMOD_RESULT result;
FMOD::Channel* channel = NULL;
FMOD::Sound* sound = NULL;

FMOD_CREATESOUNDEXINFO info;
memset ( &info, 0, sizeof ( FMOD_CREATESOUNDEXINFO ));
info.cbsize = sizeof ( FMOD_CREATESOUNDEXINFO );
info.defaultfrequency = 44100;
info.numsubsounds = this->mQueue.size();
info.numchannels = 2;
info.format = FMOD_SOUND_FORMAT_PCM16;
result = system->createSound ( 0, FMOD_LOOP_OFF | FMOD_OPENUSER, &info, &sound );

int i = 0;
for ( std::vector< MOAIFmodSound* >::const_iterator itr = this->mQueue.begin(); itr != this->mQueue.end(); ++itr ) {
    result = sound->setSubSound ( i++, ( *itr )->mSound );
}
result = system->playSound ( sound, 0, true, &channel );


Is there an example somewhere?
How would that work if I want some subsounds to loop, and only when I turn the loop off should the next sound play without a gap? Would I need to calculate how much of the current sound is left to play, and set the delay on the next sound's channel accordingly?

I know Banks with Events are the way to go for this, but I am trying to replicate that functionality with the FMOD low-level API to support an old project.

Ok, with a looping subsound that stops and continues into another subsound, you wouldn't have been able to do that with setSubSoundSentence anyway.
Yes, you would pretty much calculate the next clock value at which the current loop iteration ends, then start your second sound at that exact point. You would call setMode(FMOD_LOOP_OFF) and setDelay(newtime) together at the same time.

I haven't got an example for you; this is what I wrote for another customer. We will add an example in the next release.

To play a second sound exactly when the first sound ends:
The best technique is to use ChannelControl::getDSPClock to get the channel's parent channel group's clock. If you don't specify a channelgroup in playSound, this will be the master channel group (System::getMasterChannelGroup).

Now add a few mix blocks of delay to that clock value (add the System::getDSPBufferSize block size, i.e. 1024 by default on Windows), so in the Windows case 2 * 1024 = 2048 should be enough.

Call System::playSound for the first sound with 0 or your channelgroup as the channelgroup parameter, with paused = true, and while paused call
FMOD_RESULT F_API setDelay (unsigned long long dspclock_start, unsigned long long dspclock_end, bool stopchannels = true);
with dspclock_start = the value calculated above.

Call System::playSound for the second sound whenever you want; you can do it immediately if you know this far in advance what you want to stitch onto the end of the first sound. For it, use
dspclock_start = the value calculated above + length_of_sound_in_output_samples.

length_of_sound_in_output_samples = sound_length_pcm * output_rate / sample_rate_of_sound;

Use Sound::getLength with FMOD_TIMEUNIT_PCM to get sound_length_pcm
Use System::getSoftwareFormat for output_rate
Use Sound::getDefaults for sample_rate_of_sound.

What this does is queue up sounds in advance with sample-accurate precision.
You can also tweak the second sound's offset and do an overlap or crossfade if you want (see ChannelControl::addFadePoint).

In FMOD Studio, sentencing is deprecated in favour of the ChannelControl::setDelay feature. You get precise stitching, and something the old API couldn't do: micro crossfades if you want them (ChannelControl::addFadePoint).
You can play each sound you want to stitch at practically the same time: offset the first sound to a short time in the future (i.e. one mix block), then the second one to that time + the length of the first sound in output samples (i.e. 1 second = 48000 samples, if that's what the system is set to).

Hi Brett,

And would you recommend this model for creating dynamic commentary for announcers in a sports game?
We are currently switching a project from Ex to Studio, and this option looks quite complex to me compared to the previous setSubSoundSentence for building sentences with variables (like player names, etc...), or have I missed something?
Thanks for your hint!

There is an example in the examples folder called granular synth.

To stitch sounds you queue them into the future. It's not difficult to implement and is a lot more flexible (i.e. you can crossfade 2 sounds if you want, or stitch sounds of different formats or channel counts) and more stable (trying to stitch sounds in the stream thread or mixer thread could cause all sorts of issues, especially if user callbacks were involved).

The granular synth example has a function called queue_next_sound which does all of the clock work for you.