FMOD_DSP_BUFFER_ARRAY buffers guidance?

Hi,

I’ve been messing around with creating custom DSPs for FMOD Studio. I think I have understood the core logic by looking at the examples and creating a ‘Mute’ plugin. However, I’m a little confused about how exactly the buffers work.

For example, does the number of buffers set in the FMOD_DSP_DESCRIPTION correspond to the number of buffers given in the FMOD_DSP_BUFFER_ARRAY?

I did this for my ‘Mute’ plugin:

unsigned int samples = length * inbufferarray[0].buffernumchannels[0];

while (samples--)
{
    *outbufferarray[0].buffers[0]++ = *inbufferarray[0].buffers[0]++;
}

I’m a little confused about how this code affects all channels, though. In my head, I imagine you would have to write something like

for (int i = 0; i < numBuffers; i++)
{
    for (int y = 0; y < length; y++)
    {
        outputbuffer[i][y] = value;
    }
}

This way of thinking comes partly from working with JUCE (I admit my knowledge there is limited too):

float* leftChannel = buffer.getWritePointer(0);
for (int i = 0; i < buffer.getNumSamples(); ++i)
{
    float sample = 0.0f; // audio logic here
    leftChannel[i] = sample;
}

I’ve asked a lecturer about this and I think I’ve got a little more understanding, but am I correct in thinking *outputbuffer++ will eventually go over all buffers? Or does it only go over the i’th array? If it does go over all buffers, how does it not go out of bounds?

I think I’ve got the length * channels part (as it will give you every sample in all channels), but not how it works with the buffers and using ++.

Any help will be appreciated.

Thank you,
James

OK, to start: don’t worry about looping through numBuffers, just always use buffers[0].
The capability to handle multiple incoming buffers from a DSP that outputs multiple buffers was planned but never implemented. The idea was to allow splitting a signal via buffers, but you can just do it with connections, like the dsp_effect_per_speaker example in the SDK.

The only thing you need to do is multiply length * channels, like you said, to get all the floats in the whole buffer.
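For reference, here is a minimal sketch of a process callback built around that idea (the callback name is a placeholder and the FMOD_DSP_PROCESS_QUERY handling is kept to the bare minimum, so treat it as a starting point rather than a complete plugin):

#include "fmod.hpp"

FMOD_RESULT F_CALLBACK MyDSP_Process(FMOD_DSP_STATE *dsp_state, unsigned int length,
                                     const FMOD_DSP_BUFFER_ARRAY *inbufferarray,
                                     FMOD_DSP_BUFFER_ARRAY *outbufferarray,
                                     FMOD_BOOL inputsidle, FMOD_DSP_PROCESS_OPERATION op)
{
    if (op == FMOD_DSP_PROCESS_QUERY)
    {
        return FMOD_OK; // nothing to allocate or reconfigure in this sketch
    }

    // Only buffers[0] is used. 'length' is in samples, so multiply by the
    // channel count to cover every float in the interleaved buffer.
    float       *in    = inbufferarray->buffers[0];
    float       *out   = outbufferarray->buffers[0];
    unsigned int count = length * inbufferarray->buffernumchannels[0];

    while (count--)
    {
        *out++ = *in++; // pass-through; a 'mute' would write 0.0f instead
    }

    return FMOD_OK;
}

This would be assigned to the process callback member of your FMOD_DSP_DESCRIPTION.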

Ah, thank you very much. That is making sense now.

So if that’s the case, buffers[0] contains each channel, one after the other, and to find the order you use the channel mask?

Yes, the PCM layout is specified here:
https://www.fmod.com/resources/documentation-api?page=content/generated/overview/terminology.html#/

Unless you care about speakers (i.e. for panning), the channelmask is usually irrelevant, but it will tell you if the signal coming through is meant for different speakers or not. Altering the channel mask from the standard speaker configuration is very rare and usually set up by someone using DSP::setChannelFormat (i.e. you can say ‘this signal is mono but is actually the LFE’).
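To make that concrete, a hedged sketch of the call mentioned above, assuming ‘dsp’ is a DSP you have already created, and using the LFE case as the example (the function name is made up for illustration):

#include "fmod.hpp"

// Sketch only: tell FMOD that this DSP's mono signal is actually the LFE channel.
void markMonoSignalAsLFE(FMOD::DSP *dsp)
{
    dsp->setChannelFormat(FMOD_CHANNELMASK_LOW_FREQUENCY, 1, FMOD_SPEAKERMODE_DEFAULT);
}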

Hi again Brett,

I’ve got a little stuck again on channel order.

The documentation says a left/right pair is one sample, but what happens with more channels?

My current assumption from some of the examples is that it goes Left, Right, Back Left, Back Right (and so on for all channels) before repeating that order, rather than Left Right, Left Right until ‘length’ and then BL BR, BL BR, which is what I thought at the start.

Is there documentation on altering one channel differently from the others? Maybe making something like the Channel Mix plugin provided in Studio?

And the final confusion for me is doing “length in samples times channels”, as I feel you would get an inaccurate number if one sample is two channels.

For example, if you had one sample (a left/right pair), it would be ‘1 * 2’ for one sample times two channels (stereo). You would then loop through 2 samples, but there’s only 1.

However, I can tell I’m missing something and have come to the wrong conclusion.

Thank you again,
James

I’ve just had a mess around with stuff and I think I have understood everything correctly.

I’m currently under the impression that the sample data passed in the buffers is 32-bit floats, and that the channels are laid out sample after sample, and definitely not as blocks of channels after blocks of channels, or multiple channels packed into one sample.

In the Terminology / Basic Concepts page, the buffer is 16-bit, so of course two channels would go into one 32-bit sample. I think I got confused between that information and the DSP documentation saying “Pointer to outgoing floating point -1.0 to +1.0 ranged data”. I had assumed, somehow, that two L/R channels were being summed into one sample.

In reply to myself where I was confused about length * channels: of course there are not two channels in one sample, so this now makes complete sense.

Thank you again Brett for your help!

They are blocks of channels; the diagram is pretty clear on how the data is laid out.

8 floats in a row for a 7.1 speaker mix is 1 sample, then the next 8 floats are the 2nd sample. It is interleaved.
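To tie that back to the earlier Channel Mix question, here is a hedged sketch of processing one channel differently from the others on the interleaved data (the callback name and the hard-coded gains are made up for illustration; in a real plugin the gains would come from DSP parameters):

#include "fmod.hpp"

FMOD_RESULT F_CALLBACK ChannelGain_Process(FMOD_DSP_STATE *dsp_state, unsigned int length,
                                           const FMOD_DSP_BUFFER_ARRAY *inbufferarray,
                                           FMOD_DSP_BUFFER_ARRAY *outbufferarray,
                                           FMOD_BOOL inputsidle, FMOD_DSP_PROCESS_OPERATION op)
{
    if (op == FMOD_DSP_PROCESS_QUERY)
    {
        return FMOD_OK;
    }

    float *in       = inbufferarray->buffers[0];
    float *out      = outbufferarray->buffers[0];
    int    channels = inbufferarray->buffernumchannels[0];

    // Example per-channel gains: halve channel 0, leave everything else untouched.
    // (32 is FMOD's maximum channel width.)
    float gains[32];
    for (int ch = 0; ch < 32; ch++) { gains[ch] = 1.0f; }
    gains[0] = 0.5f;

    // Interleaved layout: sample 0 holds channels 0..channels-1 back to back,
    // then sample 1, and so on. Channel 'ch' of sample 's' is at [s * channels + ch].
    for (unsigned int s = 0; s < length; s++)
    {
        for (int ch = 0; ch < channels; ch++)
        {
            out[s * channels + ch] = in[s * channels + ch] * gains[ch];
        }
    }

    return FMOD_OK;
}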