How to continuously play audio using Programmer Instrument

I’m new to FMOD. I want to procedurally splice different audio clips in Unity so they play back continuously. So far I have followed the documentation (Scripting Examples | Programmer Sounds) to implement something similar, but I found there is roughly a 50 ms gap between SOUND_STOPPED and DESTROY_PROGRAMMER_SOUND, which makes the two audio segments sound discontinuous. Is there a way to reuse the same programmer instrument to play different audio files one after another?

Hi,

Could I please grab some info?

  • What version of Unity and the FMOD integration are you using?
  • Would it be possible to get a code snippet to see how you are generating the sound?
  • Could I get a more detailed explanation of the behavior you are after?

It may be possible to keep the instrument playing while swapping out the sound source.

It may be easier to interact directly with the Core API for this behavior rather than trying to use a programmer sound.
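As a rough illustration of that suggestion, here is a minimal sketch of playing a file through the Core API from Unity. This is not the poster's code; the path is a placeholder and error checking is omitted:

```csharp
using FMOD;
using FMODUnity;

public static class CoreApiPlayback
{
    // Hypothetical sketch: play an audio file directly on the Core system,
    // bypassing Studio events and programmer instruments entirely.
    public static Channel PlayClip(string path)
    {
        FMOD.System core = RuntimeManager.coreSystem;

        // CREATESAMPLE decodes the whole file into memory up front, so a
        // later playSound call can start with minimal latency.
        core.createSound(path, MODE.CREATESAMPLE, out Sound sound);
        core.playSound(sound, default, false, out Channel channel);
        return channel;
    }
}
```

Because the sound is preloaded, starting the next clip is limited only by mixer update granularity rather than by file loading or instrument teardown.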

Sure. Thank you for your reply.

Unity 2022.3.45, FMOD 2.02.23

I am designing a dialogue system and need to concatenate character names with their actions at runtime. The documentation example suggests that a programmer instrument is the convenient approach, but if the same thing can be achieved directly through the Core API, that would be even better.

I have come up with a workaround that assigns a discrete parameter to the event, with a programmer instrument placed at both value 0 and value 1. By toggling the parameter as soon as SOUND_STOPPED fires, the event proceeds straight to the next CREATE_PROGRAMMER_SOUND instead of waiting for the instrument to be destroyed.
Here is the relevant part of the callback function:

private FMOD.RESULT PlaySentenceEventCallback(FMOD.Studio.EVENT_CALLBACK_TYPE type, IntPtr instancePtr, IntPtr paramPtr)
{
   var instance = new FMOD.Studio.EventInstance(instancePtr);

   instance.getUserData(out var dataPtr);
   
   var dataHandle = GCHandle.FromIntPtr(dataPtr);

   if (dataHandle.Target is not SentenceRuntimeData data)
   {
       UnityEngine.Debug.LogError($"cast user data to SentenceRuntimeData failed");
       return FMOD.RESULT.OK;
   }

   UnityEngine.Debug.Log($"Play sentence event: {type}");
   
   switch (type)
   {               
       case FMOD.Studio.EVENT_CALLBACK_TYPE.CREATE_PROGRAMMER_SOUND:
       {
           if (data.Sounds.Count == 0)
               break;
           
           var sound = data.Sounds.Dequeue();
           
           InitSound(paramPtr, sound.Sound, sound.SubSoundIndex);

           break;
       }

       case FMOD.Studio.EVENT_CALLBACK_TYPE.SOUND_STOPPED:
       {
           ReleaseSound(paramPtr);
           
           if (data.Sounds.Count > 0)
           {
               data.Index = (data.Index + 1) % 2;
               instance.setParameterByID(data.ParameterId, data.Index);
           }

           break;
       }
       
       case FMOD.Studio.EVENT_CALLBACK_TYPE.STOPPED:
       {
           dataHandle.Free();
           
           break;
       }
   }
   
   return FMOD.RESULT.OK;
}

private static void InitSound(in IntPtr paramPtr, FMOD.Sound sound, int subSoundIndex = -1)
{
   // Point the programmer sound properties at the sound we want to play.
   var parameter = Marshal.PtrToStructure<FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES>(paramPtr);
   parameter.sound = sound.handle;
   parameter.subsoundIndex = subSoundIndex;
   
   Marshal.StructureToPtr(parameter, paramPtr, false);
}


private static void ReleaseSound(in IntPtr paramPtr)
{
   // Release the sound the instrument has finished playing.
   var parameter = Marshal.PtrToStructure<FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES>(paramPtr);
   var sound = new FMOD.Sound(parameter.sound);
           
   sound.release();
}

The logs look good. The original sequence was like this:

2024/09/23 16:31:28.943 | INFO | Play sentence event: CREATED
2024/09/23 16:31:28.943 | INFO | Play sentence event: STARTING
2024/09/23 16:31:28.943 | INFO | Play sentence event: CREATE_PROGRAMMER_SOUND
2024/09/23 16:31:28.944 | INFO | Play sentence event: SOUND_PLAYED
2024/09/23 16:31:28.944 | INFO | Play sentence event: STARTED
2024/09/23 16:31:29.181 | INFO | Play sentence event: SOUND_STOPPED
2024/09/23 16:31:29.221 | INFO | Play sentence event: DESTROY_PROGRAMMER_SOUND
2024/09/23 16:31:29.262 | INFO | Play sentence event: STOPPED
2024/09/23 16:31:29.262 | INFO | Play sentence event: STARTING
2024/09/23 16:31:29.262 | INFO | Play sentence event: CREATE_PROGRAMMER_SOUND
2024/09/23 16:31:29.262 | INFO | Play sentence event: SOUND_PLAYED
2024/09/23 16:31:29.262 | INFO | Play sentence event: STARTED
2024/09/23 16:31:30.031 | INFO | Play sentence event: SOUND_STOPPED
2024/09/23 16:31:30.071 | INFO | Play sentence event: DESTROY_PROGRAMMER_SOUND
2024/09/23 16:31:30.121 | INFO | Play sentence event: STOPPED

After the modification, it looks like this:

2024/09/23 17:16:40.079 | INFO | Play sentence event: CREATED
2024/09/23 17:16:40.079 | INFO | Play sentence event: STARTING
2024/09/23 17:16:40.079 | INFO | Play sentence event: CREATE_PROGRAMMER_SOUND
2024/09/23 17:16:40.080 | INFO | Play sentence event: SOUND_PLAYED
2024/09/23 17:16:40.080 | INFO | Play sentence event: STARTED
2024/09/23 17:16:40.326 | INFO | Play sentence event: SOUND_STOPPED
2024/09/23 17:16:40.327 | INFO | Play sentence event: CREATE_PROGRAMMER_SOUND
2024/09/23 17:16:40.327 | INFO | Play sentence event: SOUND_PLAYED
2024/09/23 17:16:40.366 | INFO | Play sentence event: DESTROY_PROGRAMMER_SOUND
2024/09/23 17:16:41.116 | INFO | Play sentence event: SOUND_STOPPED
2024/09/23 17:16:41.116 | INFO | Play sentence event: CREATE_PROGRAMMER_SOUND
2024/09/23 17:16:41.117 | INFO | Play sentence event: SOUND_PLAYED

But audibly there is still a noticeable gap. The result is not as smooth as concatenating the PCM data of two AudioClips directly, or as splicing two audio instruments together within an event.
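A Core API technique that can remove this gap entirely is sample-accurate scheduling with Channel.setDelay: start the next channel paused and set its start time on the mixer's DSP clock. A hedged sketch, assuming both clips are already loaded as Core API Sounds (names are placeholders, error checking omitted):

```csharp
using FMOD;
using FMODUnity;

public static class GaplessQueue
{
    // Hedged sketch: schedule 'next' to begin exactly when 'current' ends,
    // using the mixer's DSP clock for sample-accurate stitching.
    public static void ScheduleAfter(Channel current, Sound next)
    {
        FMOD.System core = RuntimeManager.coreSystem;

        // Work out how many mixer samples remain in the current clip.
        current.getDSPClock(out _, out ulong parentClock);
        current.getCurrentSound(out Sound sound);
        sound.getLength(out uint lengthMs, TIMEUNIT.MS);
        current.getPosition(out uint posMs, TIMEUNIT.MS);
        core.getSoftwareFormat(out int sampleRate, out _, out _);
        ulong remaining = (ulong)((lengthMs - posMs) * (sampleRate / 1000.0));

        // Start the next channel paused, set its start clock, then unpause.
        core.playSound(next, default, true, out Channel channel);
        channel.setDelay(parentClock + remaining, 0, false);
        channel.setPaused(false);
    }
}
```

Note that the scheduling itself is sample-accurate, but the next clip should be queued a little ahead of time, since playSound only takes effect on a mixer update.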


Thank you for the information.

If you download the FMOD Engine, we include an API example, user_created_sound, which demonstrates how to use a callback to pass PCM data directly to a sound. On Windows, the examples can be found here: "C:\Program Files (x86)\FMOD SoundSystem\FMOD Studio API Windows\api\core\examples\vs2019\examples.sln". This may be closer to the behavior you are looking for.

To access the Core system in Unity use RuntimeManager.coreSystem.

Hope this helps.
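As a hedged sketch of the idea that example demonstrates (not the example's actual code): a sound created with MODE.OPENUSER pulls its PCM from a callback, so successive clips can be fed into one continuously playing sound with no gap. The format values are assumptions and the silent fill is a placeholder for real dialogue data:

```csharp
using System;
using System.Runtime.InteropServices;
using FMOD;

public static class UserSound
{
    // Keep a reference so the delegate is not garbage collected
    // while native code holds the callback.
    private static readonly SOUND_PCMREAD_CALLBACK PcmRead = ReadPcm;

    public static Sound Create(FMOD.System core)
    {
        var exinfo = new CREATESOUNDEXINFO
        {
            cbsize = Marshal.SizeOf(typeof(CREATESOUNDEXINFO)),
            numchannels = 1,               // assumed: mono dialogue
            defaultfrequency = 48000,      // assumed sample rate
            format = SOUND_FORMAT.PCM16,
            length = 48000 * 2,            // one second of 16-bit mono
            pcmreadcallback = PcmRead,
        };

        core.createSound("", MODE.OPENUSER | MODE.LOOP_NORMAL, ref exinfo, out Sound sound);
        return sound;
    }

    [AOT.MonoPInvokeCallback(typeof(SOUND_PCMREAD_CALLBACK))]
    private static RESULT ReadPcm(IntPtr soundraw, IntPtr data, uint datalen)
    {
        // Fill 'data' with the next 'datalen' bytes of PCM here, e.g. copied
        // from the queued dialogue clips. Writing zeros produces silence.
        for (int i = 0; i < datalen; i++)
            Marshal.WriteByte(data, i, 0);
        return RESULT.OK;
    }
}
```

Because the sound never stops between clips, there is no instrument teardown at all; the trade-off is that you own the decoding and format handling yourself.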

Thank you, I will download it and study it.

However, if I am using FMOD, I would prefer to access audio data directly through events or keys, as with an AudioTable, rather than having to deal with sample rates and channel counts, which raw PCM requires. Can the Core API also access resources from the FMOD Studio project in the same way?

Thanks for the info. Apologies for the confusion; could I confirm your workflow?

Are you using audio files directly or are you using an audio table to play files from within the Studio project?

Could you elaborate on this as well, please?

I have built a dialogue system using FMOD with an AudioTable and a set of dialogue data; it worked well until I ran into these latency issues.