I’ve looked at fmod_dsp.h, fmod_noise.cpp, fmod.hpp and a good chunk of the rest of the files in the API examples. After three weeks I’m no further along. I’m perplexed because noise seems to be generated without creating or referencing a delta time or time step, declaring a block size, or using doubles.
Simply changing from generating noise to a sine wave has proved impossible. I can’t get anywhere, as there is no commenting that explains what is happening, as far as I can see.
There’s documentation on the structure of the DSP network but no documentation on the structure of the Plugin API.
Are you able to share your code? We do have work to do on the plugin API to make it much easier to work with, but in the meantime we should be able to help you out here. (It would benefit other community members with similar questions too!)
My plan is to make an additive synth + shaped noise type thing. I’ve been basing this off of fmod_noise.cpp, and I’ve gotten far enough that I can generate a sine wave with the function below:
This does generate a sine wave, and I can change the value (12.f) to change the frequency, but only to even integers; otherwise I get mirrored sine waves like this:
Okay, there’s a couple of issues I can spot there.
First, the samples generated by the generate function need to be interleaved into the output buffer. The interleaved buffer format groups samples by channels, so the first n entries are the first sample for each of the n channels, the second n entries are the second sample for each of the channels, and so on. Another way to think of it is that the buffer is a sequence of sample frames where each frame is a single sample for each channel. The length parameter of the DSP process callback is the number of sample frames.
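As a sketch of that layout, the sample for a given channel within a given frame sits at index frame * channels + channel. The helper below is purely illustrative and not part of the FMOD API:

```cpp
// Hypothetical helper, not part of the FMOD API: fetch one sample from an
// interleaved buffer, where each frame holds one sample per channel.
float sampleAt(const float *buffer, unsigned int frame, int channels, int channel)
{
    return buffer[frame * channels + channel];
}
```

So for 5.1 (6 channels), the front-left sample of the third frame (frame index 2) would sit at index 2 * 6 + 0 = 12.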
This diagram shows how three frames of sample data for 5.1 surround would be interleaved:
fmod_noise.cpp does have a comment mentioning this interleaving requirement, and the first loop which deals with gain ramping to avoid pops does explicitly interleave the output samples by iterating by samples then channels, but unfortunately it doesn’t demonstrate interleaving the generated signal for the main loop. Because it’s a noise generator it doesn’t bother to generate discrete signals for each channel.
It might be easier to understand if a generate function was evaluated for each channel. In your case to generate a sine wave:
for (unsigned int s = 0; s < length; ++s) // for each frame
{
    for (unsigned int c = 0; c < channels; ++c) // for each channel
    {
        *outbuffer++ = sinf((2 * PI) * m_rotation);
    }
    m_rotation += m_frequency / sampleRate;
}
You might notice that I changed your evaluation code slightly there, introducing the variable m_rotation which is incremented for each sample frame. This brings me to the second problem I can see with your current implementation - your generator needs to keep track of its position as it generates so that successive calls to your DSP process function will generate sequential sample frames instead of just generating the first n sample frames each time it is called. After a frame is generated the rotation is incremented by the desired frequency of the sine wave (in degrees per second), divided by the system sample rate (in hertz). The system sample rate can be retrieved via the FMOD_DSP_STATE passed to the process callback:
// Get the system sample rate. Error result handling omitted for clarity.
float sampleRate;
dsp->functions->getsamplerate(dsp, &sampleRate);
I hope this answer gets you unstuck, but please don’t hesitate to ask for more help if anything is unclear, this will help guide us in improving the quality of our documentation!
I’ve implemented everything that you mentioned but I’m still getting broken sine waves.
Should I be using the DSP API or the Plugin API? The documentation and what you’ve said gives me conflicting ideas that they’re the same thing and two different things.
The starting ramp didn’t seem necessary so I removed that part; it didn’t seem to affect anything with the example I had running the code from the previous post. It seems to me that I’ll get more flexibility using an AHDSR.
The “getsamplerate” function can’t accept a float, so I changed it to an int.
I’m getting errors that m_frequency, m_rotation and m_samplerate are undeclared and undefined in the generate callback; should I be adding them to the arguments of the generate callback? In the meantime I’ve set the m_rotation declaration to 0 and the frequency to 158400 degrees per second (440Hz). I included the get sample rate segment in the dspprocess callback and edited the generate callback to take the sample rate as one of its arguments.
With this implementation I get this:
That signal is at ~14kHz for some reason.
The code changes are as below:
void FMODNoiseState::generate(float *outbuffer, unsigned int length, int channels, int samplerate)
{
    float m_rotation = 0.0f;
    float m_frequency = 158400.0f; // 440Hz converted to degrees per second

    // Note: buffers are interleaved
    for (unsigned int s = 0; s < length; ++s) // for each frame
    {
        for (unsigned int c = 0; c < channels; ++c) // for each channel
        {
            *outbuffer++ = sinf((2 * PI) * m_rotation);
        }
        m_rotation += m_frequency / samplerate;
    }
}
Sorry Chris, I should have spelled out that m_rotation needs to be added as a member variable to the DSP state (FMODNoiseState). It should be set to zero in the reset function. Resetting it to zero each time you call generate is what’s causing that discontinuity.
The higher than expected frequency is because you multiplied the frequency in Hz by 360 to convert to degrees per second, but that conversion was already being done implicitly in the call to sinf (the 2 * PI factor advances one full cycle per unit of m_rotation). The result is that your output frequency is 360 times the desired frequency: 158.4kHz instead of 440Hz, which then aliases down to the ~14kHz you’re seeing (assuming a 48kHz sample rate).
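Putting both fixes together (a persistent phase member and a frequency expressed in Hz), a minimal sketch of a corrected generator might look like the following. The class and member names are assumptions modelled on the noise example, not FMOD API types:

```cpp
#include <cmath>

const float PI = 3.14159265f;

// Sketch of a sine generator state modelled on FMODNoiseState (names assumed).
class SineState
{
public:
    void reset() { m_rotation = 0.0f; }

    // Fills 'length' frames of interleaved output.
    void generate(float *outbuffer, unsigned int length, int channels, float sampleRate)
    {
        for (unsigned int s = 0; s < length; ++s)   // for each frame
        {
            float sample = sinf(2.0f * PI * m_rotation);
            for (int c = 0; c < channels; ++c)      // for each channel
            {
                *outbuffer++ = sample;
            }
            // m_frequency is in Hz; sinf's 2 * PI factor supplies the
            // cycles-to-radians conversion, so no factor of 360 is needed.
            m_rotation += m_frequency / sampleRate;
        }
    }

private:
    float m_rotation  = 0.0f;   // phase in cycles; persists across process calls
    float m_frequency = 440.0f; // desired pitch in Hz
};
```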
Regarding the API, our plugin API is composed of the DSP API, the Output API and the Codec API - the DSP API is absolutely the right choice here. (It should be feasible to build a generator using the Codec API but it’s definitely not recommended!)
Thanks, Derek, for clearing that up. You can probably already tell that this is unexplored territory for me.
I’ve zeroed the m_rotation variable in the reset function and set it as a member variable in the FMODNoiseState class. I’ve tried m_rotation as both public and private and I get the same result.
I’m getting a steady sine wave, no discontinuity. However, the frequency jumps to either side of the specified frequency after a while. This is repeatable and doesn’t change on subsequent playings, but does change depending on the frequency specified in code.
The first is set to 440Hz, the second set to 1075Hz
Pretty drastic drop at the end of this one.
You can also see the building of harmonics scattered all about.
Could these be results of cumulative floating point errors? I’ve been looking at other synthesis frameworks and they use doubles for buffer values and such. Would that help?
I don’t know what might cause that and I’m afraid it’s really beyond the scope of support we can offer on the forums. It may be a range/precision issue, in which case keeping m_rotation normalized within a single cycle would fix it. Perhaps other forum users might have some insight into this phenomenon.
If you have further questions about using our API we’re always happy to help answer those!
After looking it over with one of our programmers, the errors and drifting frequencies were caused by cumulative floating point errors.
As long as the maths before writing to the outbuffer is done in doubles, these problems should be mitigated.
double m_Freq = 1175.0;

for (unsigned int s = 0; s < length; ++s) // for each frame
{
    for (int c = 0; c < channels; ++c) // for each channel
    {
        *outbuffer++ = sin((2.0 * PI) * m_rotation) * (sin((2.0 * PI) * m_LFO_rot));
    }
    m_rotation += m_Freq / samplerate;
    m_LFO_rot += m_LFO_Freq / samplerate; // LFO phase advanced by its own rate member
}
For a periodic signal x(t), it is true that x(t + T) = x(t), where T is the signal period. Once m_rotation exceeds unity the signal repeats,
so you can and should limit the range of m_rotation:
if (m_rotation >= 1) m_rotation -= 1;
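Pulling the whole thread together: persistent double-precision phases, Hz-based increments, interleaved output, and phases wrapped into a single cycle. A final sketch (class and member names follow the earlier posts but are assumptions; the LFO rate is illustrative since the post doesn’t give one):

```cpp
#include <cmath>

const double PI = 3.141592653589793;

// Sketch of the amplitude-modulated sine generator (names assumed).
class SineState
{
public:
    void generate(float *outbuffer, unsigned int length, int channels, double sampleRate)
    {
        const double freq    = 1175.0; // oscillator pitch in Hz (from the post above)
        const double lfoFreq = 5.0;    // LFO rate in Hz (illustrative)

        for (unsigned int s = 0; s < length; ++s)       // for each frame
        {
            float sample = (float)(sin(2.0 * PI * m_rotation)
                                 * sin(2.0 * PI * m_LFO_rot));
            for (int c = 0; c < channels; ++c)          // for each channel
            {
                *outbuffer++ = sample;
            }

            m_rotation += freq    / sampleRate;
            m_LFO_rot  += lfoFreq / sampleRate;

            // x(t + T) = x(t): the phases are periodic, so wrap them to keep
            // the accumulators small and precise.
            if (m_rotation >= 1.0) m_rotation -= 1.0;
            if (m_LFO_rot  >= 1.0) m_LFO_rot  -= 1.0;
        }
    }

private:
    double m_rotation = 0.0; // oscillator phase in cycles
    double m_LFO_rot  = 0.0; // LFO phase in cycles
};
```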