Big thanks!
You mean, even if I’m only targeting Android 8.0 and up, there’s no magic in AAudio, so I still need to configure the settings exactly the same way as with OpenSL ES, right?
It’s been a long time since I last used FMOD Studio on Android, so I had to read all the documents over again, and this time, I’m a little confused.
To be honest, VERY CONFUSED!
In one of your documents about Android Audio Latency, it says:
Devices which report FEATURE_AUDIO_LOW_LATENCY will be able to achieve lower latency playback, especially if the below tips are followed.
For API level 17 devices using the OpenSL output mode you can achieve lower latency by using System::getDriverInfo to fetch the recommended sample rate
and applying the value to System::setSoftwareFormat, by default the sample rate is 24KHz to keep CPU overheads low, using a higher rate will cost more CPU time.
“by default the sample rate is 24KHz to keep CPU overheads low”
Yeah, this one was what I referred to when I last wrote my android app using FMOD Studio.
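Just so we’re on the same page, this is roughly the sequence I mean (a sketch only, with error checking omitted; I’m assuming driver 0 is the active output device):

```cpp
#include <fmod.hpp>

int main()
{
    FMOD::System *system = nullptr;
    FMOD::System_Create(&system);

    // Fetch the recommended output rate; we only ask for the sample rate
    // here and pass nullptr for the fields we don't need.
    int systemRate = 0;
    system->getDriverInfo(0, nullptr, 0, nullptr, &systemRate, nullptr, nullptr);

    // Apply it; this must happen before System::init to take effect.
    system->setSoftwareFormat(systemRate, FMOD_SPEAKERMODE_DEFAULT, 0);
    system->init(32, FMOD_INIT_NORMAL, nullptr);

    // ... play sounds ...

    system->release();
    return 0;
}
```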
But this time I found this too.
On that page, it says:
It is also highly recommended that you initialize the FMOD Java component, this will allow loading assets from the APK and automatic configuration for lowest latency.
This explanation was about calling org.fmod.FMOD.init(this) on the Java side.
In fact, I did this too without thinking too much last time, because I had to call it for assets in the APK anyway.
But I also found this page:
Core API - Android - Improved latency by automatically resampling to the native rate to enable the fast mixer.
This seems to be about org.fmod.FMOD.init() too, so now I’m in doubt whether the manual settings above are still required.
- I understand we need the right configuration for low latency whether we use the AAudio API or not, but what I’m wondering now is:
if I call org.fmod.FMOD.init(), won’t all the settings be configured automatically?
Because I was so curious, I decompiled the fmod.jar file, but the init() method did nothing but set a static Context variable.
There were also some helper methods such as supportsLowLatency(), getOutputSampleRate() and getOutputBlockSize(),
(which check FEATURE_AUDIO_PRO, FEATURE_AUDIO_LOW_LATENCY flags and PROPERTY_OUTPUT_SAMPLE_RATE, PROPERTY_OUTPUT_FRAMES_PER_BUFFER properties…)
so I’m just assuming FMOD would configure itself on the native side when the Context variable is set by init().
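In other words, if init() really only stores the Context, I’d expect to have to do something like this myself on the native side (a sketch; I’m assuming nativeSampleRate and nativeBlockSize are values I pass down over JNI from AudioManager’s PROPERTY_OUTPUT_SAMPLE_RATE and PROPERTY_OUTPUT_FRAMES_PER_BUFFER):

```cpp
#include <fmod.hpp>

// Hypothetical helper: apply the device properties queried on the Java side.
// Both calls must happen before System::init.
void configureForLowLatency(FMOD::System *system,
                            int nativeSampleRate,   // e.g. 48000 from PROPERTY_OUTPUT_SAMPLE_RATE
                            int nativeBlockSize)    // e.g. 192 from PROPERTY_OUTPUT_FRAMES_PER_BUFFER
{
    // Match the mixer rate to the device's native rate.
    system->setSoftwareFormat(nativeSampleRate, FMOD_SPEAKERMODE_DEFAULT, 0);

    // Match the DSP buffer length to the device's burst size.
    system->setDSPBufferSize(static_cast<unsigned int>(nativeBlockSize), 2);
}
```

Is this what FMOD already does internally once the Context is set, or do I still need to write it?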
Also, reading the first quote again, I’m getting confused even more.
- How can applying the sample rate fetched with System::getDriverInfo() to System::setSoftwareFormat() help, when you also say
“24KHz to keep CPU overheads low, using a higher rate will cost more CPU time”?
Most smartphones these days report either 44.1kHz or 48kHz, which according to your explanation means more CPU time.
I personally use a lot of DSPs like FMOD_DSP_TYPE_SFXREVERB and FMOD_DSP_TYPE_PITCHSHIFT, and I even call FMOD_Channel_SetFrequency().
I also saw somewhere in your documentation that resampling can’t be avoided when DSPs are used.
So I don’t think it’s that beneficial to spend extra CPU time just to enable the fast mixer. But, well, I’m just guessing here.
On the other hand, what if I keep your default value of 24000 but my source files are recorded at 48000 or 44100?
Resampling all those files may require more CPU time in that case.
Or maybe resampling is cheaper than processing high-sample-rate audio, and all of this is just a matter of balancing trade-offs?
Because I have no idea at all how FMOD works internally, I’m also guessing here:
System::setSoftwareFormat() may set OpenSL ES’s SLDataFormat_PCM.samplesPerSec value ONLY.
Or it may set only FMOD’s internal processing rate (and then resample to match OpenSL ES’s rate right before sending the audio out).
Or both.
- Which one of the above is true?
FMOD DSP network is one of the important aspects in FMOD, right?
In this link, it says:
Wavetable Unit : This unit reads raw PCM data from the sound buffer and resamples it to the same rate as the soundcard. A Wavetable Unit is only connected when the user calls System::playSound.
Once resampled, the audio data is then processed (or flows) at the rate of the soundcard. This is usually 48khz by default. (22khz on iOS)
This confuses me again about two things.
- “the same rate as the soundcard”? Did you mean the same rate as the value set by System::setSoftwareFormat()?
If not, it’s very confusing and I’d appreciate an explanation.
- It’s also unclear whether the data is resampled just once when System::playSound() is first called and then kept, or resampled every time the audio data is processed during playback.
For sounds created with FMOD_CREATESTREAM or FMOD_CREATECOMPRESSEDSAMPLE, which need the DSPCodec Unit’s help, the latter makes sense, but for FMOD_CREATESAMPLE I’m not sure.
When FMOD_System_CreateSound() is called with the FMOD_CREATESAMPLE flag, I know the sound buffer will contain uncompressed PCM data.
But what about the sample rate of the sound?
- Will it be resampled to match FMOD’s configuration too?
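If it helps clarify the question, this is the experiment I have in mind (a sketch; `system` is assumed already initialized, and "sfx.wav" is a hypothetical file recorded at 48000 Hz):

```cpp
#include <fmod.hpp>

void inspectSampleRate(FMOD::System *system)
{
    // Load fully decompressed PCM into memory.
    FMOD::Sound *sound = nullptr;
    system->createSound("sfx.wav", FMOD_CREATESAMPLE, nullptr, &sound);

    // Query the rate the sound will play back at by default.
    float defaultFrequency = 0.0f;
    int priority = 0;
    sound->getDefaults(&defaultFrequency, &priority);

    // My guess: if defaultFrequency still reads 48000 while the mixer runs
    // at 24000, the PCM was kept at its original rate and the Wavetable Unit
    // resamples during playback; if it reads 24000, the data was resampled
    // once at load time. I don't know which is actually the case.

    sound->release();
}
```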
Sorry for spamming questions like this, but latency really matters for my app, so I need to clear these things up before starting the project.