I have an application in which I want to generate drawing sounds based on input from a Wacom stylus.
So far I am stuck at synchronizing the generated sound to input from the pen. I tried both the pcmreadcallback and a custom memory buffer. In both cases I started with a larger buffer size (to test that everything works) and then reduced it to minimize latency. However, with 1000 Hz audio as input I was unable to go below a buffer size of 20 samples, i.e. ~20 ms of latency, without audible artifacts. This is a serious issue for fast strokes, as they get registered only after they end. I don't think the issue is in my pcmreadcallback implementation, as one execution takes ~0.4 ns.
I wanted to ask if FMOD has some low-latency mode. Ideally I would like to be able to stream data directly to FMOD at 1000 Hz; then I could update this data with input from the stylus on the fly.
The lowest-latency way to get audio into FMOD is to use a custom DSP effect; you can see an example in the dsp_custom example.
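Roughly, the setup looks like this. This is only a trimmed-down sketch along the lines of dsp_custom; penReadCallback, the fixed gain value and the "pen dsp" name are placeholders for your own pen-driven processing:

```cpp
#include "fmod.hpp"
#include <cstring>

// Read callback: FMOD calls this once per mixer block. 'inbuffer' holds the
// signal already mixed into this DSP unit; we scale it by a pen-driven gain
// before writing it to 'outbuffer'.
FMOD_RESULT F_CALLBACK penReadCallback(FMOD_DSP_STATE* /*state*/,
                                       float* inbuffer, float* outbuffer,
                                       unsigned int length, int inchannels,
                                       int* outchannels)
{
    float gain = 0.5f;  // placeholder: derive this from stylus pressure/speed

    for (unsigned int s = 0; s < length; s++)
    {
        for (int ch = 0; ch < *outchannels && ch < inchannels; ch++)
        {
            outbuffer[(s * *outchannels) + ch] = inbuffer[(s * inchannels) + ch] * gain;
        }
    }
    return FMOD_OK;
}

int main()
{
    FMOD::System* system = nullptr;
    FMOD::System_Create(&system);
    system->init(32, FMOD_INIT_NORMAL, nullptr);

    // Describe the custom DSP, as in the dsp_custom example.
    FMOD_DSP_DESCRIPTION desc = {};
    desc.pluginsdkversion = FMOD_PLUGIN_SDK_VERSION;
    strncpy(desc.name, "pen dsp", sizeof(desc.name));
    desc.version          = 0x00010000;
    desc.numinputbuffers  = 1;
    desc.numoutputbuffers = 1;
    desc.read             = penReadCallback;

    FMOD::DSP* dsp = nullptr;
    system->createDSP(&desc, &dsp);

    // Put the DSP at the head of the master channel group so it runs every mix.
    FMOD::ChannelGroup* master = nullptr;
    system->getMasterChannelGroup(&master);
    master->addDSP(0, dsp);

    // ... main loop: poll the stylus, hand the values to the DSP, call system->update() ...

    return 0;
}
```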
20 samples @ 48 kHz is ~0.4 ms, not 20 ms, btw.
It is better not to use extremely small buffer sizes; just update at your own, finer granularity inside one of FMOD's blocks. If a block of data in FMOD is 256 samples, you can still update your own parameters 128 or 64 or 4 samples at a time if you want (see the sketch below). This is what FMOD Studio does to get sample-accurate playback.
256 samples is a good block size anyway, as it is only ~5 ms, which is pretty imperceptible.
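For example, inside the same read callback you can step through the block in smaller chunks and re-read the pen state for each chunk. In this sketch, getLatestPenPressure, g_penPressure and the 64-sample interval are placeholders for however your app exposes the stylus data:

```cpp
#include "fmod.hpp"
#include <algorithm>
#include <atomic>

// Placeholder: the stylus/input thread writes the most recent pressure here.
std::atomic<float> g_penPressure{0.0f};

static float getLatestPenPressure()
{
    return g_penPressure.load(std::memory_order_relaxed);
}

// Even if FMOD hands this callback 256 samples at a time, the pen state can be
// re-read every kUpdateInterval samples inside the block, so the effective
// control rate is much finer than the block rate.
static const unsigned int kUpdateInterval = 64;  // 64 samples @ 48 kHz ~= 1.3 ms

FMOD_RESULT F_CALLBACK penReadCallback(FMOD_DSP_STATE* /*state*/,
                                       float* inbuffer, float* outbuffer,
                                       unsigned int length, int inchannels,
                                       int* outchannels)
{
    unsigned int done = 0;
    while (done < length)
    {
        unsigned int chunk = std::min(kUpdateInterval, length - done);
        float gain = getLatestPenPressure();  // refresh the control value per chunk

        for (unsigned int s = done; s < done + chunk; s++)
        {
            for (int ch = 0; ch < *outchannels && ch < inchannels; ch++)
            {
                outbuffer[(s * *outchannels) + ch] = inbuffer[(s * inchannels) + ch] * gain;
            }
        }
        done += chunk;
    }
    return FMOD_OK;
}
```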
Thanks!
A custom DSP seems to be the correct way to go, though I believe the delay is caused by my settings. I use the generated audio to drive a vibration motor; that's why I use 20 samples @ 1 kHz. But I guess FMOD has to output to the sound card at 48 kHz, and that is introducing the problems.
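If it is the mixer settings, something like this should let me check (and tune) the output rate and block size. Just a sketch using the Core API calls; setDSPBufferSize has to be called before init, and the 256 x 4 values are only example numbers:

```cpp
#include "fmod.hpp"
#include <cstdio>

int main()
{
    FMOD::System* system = nullptr;
    FMOD::System_Create(&system);

    // The DSP block size has to be chosen before init(); 256 samples x 4 buffers
    // is just an example value (~5 ms per block at 48 kHz).
    system->setDSPBufferSize(256, 4);

    system->init(32, FMOD_INIT_NORMAL, nullptr);

    int samplerate = 0, numraw = 0;
    FMOD_SPEAKERMODE speakermode;
    system->getSoftwareFormat(&samplerate, &speakermode, &numraw);

    unsigned int blocklen = 0;
    int numblocks = 0;
    system->getDSPBufferSize(&blocklen, &numblocks);

    printf("mixer: %d Hz, %u samples per block, %d blocks\n",
           samplerate, blocklen, numblocks);
    return 0;
}
```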