The game project that I'm currently working on mostly uses sounds that are synthesized at run time. I had been using OpenAL for playback, but recently migrated to FMOD for the wider variety of audio file formats it supports. For most of my sounds the switch was very simple, since each sound is static and does not change after it is initially synthesized.
However, I'm now trying to create a new dynamic synthesized sound effect. Its playback should change based on the speed of certain in-game objects, so I would like to stream PCM data that is generated from those values. It isn't a simple pitch or volume change, though; it's a frequency-modulation shift. The problem is that I can't find a native way within the API to synthesize and stream such a sound efficiently.
What I have tried does work, but it is horribly inefficient. Right now I create a user sound whose samples hold a constant value of +1. For each instance of this sound I create a channel group and attach a DSP effect that synthesizes the sound I actually wanted in the first place; the DSP multiplies its input against that synthesized signal. That way the 3D audio calculations FMOD performs on the constant sound carry over to the synthesized sound.
This works, but the CPU usage of this technique is huge. What I'm trying to accomplish seems like a pretty trivial thing to do (it could be done with buffer queuing in OpenAL). Is there a right way to do this efficiently within FMOD?