I am a Linux software developer working for a company specializing in 3D real-time simulators.
I come from the OpenAL Soft API and have basic knowledge of audio programming.
I’m now looking for a feature that OpenAL does not seem to provide:
While playing sounds inside a 3D audio scene (with multiple 3D-positioned sound playbacks at the same time), I’d like to record the data “heard” by a listener object (with proper 3D positioning) instead of sending it directly to the OS output audio device (speakers). Is this possible with your API?
I have to buffer the resulting audio data from the listener and replay it as a mono stream elsewhere in the same 3D scene with a small latency, potentially creating an audio feedback loop (if the replayed sound is in turn heard by the listener being recorded).
It’s a kind of virtual microphone -> speaker link inside the 3D scene, with control over the final output data… or a kind of “audio render-to-texture” system, to borrow an analogy from camera views in 3D graphics rendering…
Do you have such a feature? Or something related inside the FMOD Ex library?