Capture a listener's audio output and play it back elsewhere in the 3D scene

Hi!

I am a Linux software developer working for a company that specializes in real-time 3D simulators.

I come from the OpenAL Soft API, with basic knowledge of audio programming.
I'm now looking for a feature OpenAL does not seem to provide:

  • While playing sounds in a 3D audio scene (with multiple 3D-positioned sounds playing at the same time), I'd like to record the data "heard" by a listener object (with correct 3D positioning) instead of sending it directly to the OS output audio device (speakers). Is this possible with your API?

  • I need to buffer the resulting audio data from that listener and replay it as a mono stream elsewhere in the same 3D scene with low latency, potentially creating audio feedback (if the replayed sound is in turn heard by the listener being recorded).

It's a kind of virtual microphone -> speaker link inside the 3D scene, with
control over the final output data… Or a kind of "audio render-to-texture" system, by analogy with a camera eye and 3D graphics rendering…

Do you have such a feature? Or something related in the FMOD Ex library?


Hi Deboute,

You can achieve this setup by using the Transceiver effect.

I'm not sure whether you are using FMOD Studio or just the low-level API, but you should be able to set the effect (or DSP) to transmit from the bus/group channel the recorded sound is coming from, and then place a receiving transceiver on a 3D event or 3D sound. That event/sound must also have its own spatializer to get correct 3D positioning.
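
With the low-level (Core) API, the setup above might look roughly like the sketch below: one transceiver in transmit mode on the source group, one in receive mode on a 3D-positioned channel. This is a sketch under my own assumptions, not a tested implementation; the function name `setupTransceiverLink` and the variable names are mine, but the DSP type and parameter constants come from `fmod_dsp_effects.h`.

```cpp
#include <fmod.hpp>

// Sketch: route the mix of `sourceGroup` (the audio being "recorded")
// to `monoChannel` (a 3D-positioned channel that replays it in the scene)
// via a transceiver transmit/receive pair on broadcast channel 0.
void setupTransceiverLink(FMOD::System *system,
                          FMOD::ChannelGroup *sourceGroup,
                          FMOD::Channel *monoChannel)
{
    // Transmitter: appended to the source group's DSP chain,
    // broadcasting its input on transceiver channel 0.
    FMOD::DSP *tx = nullptr;
    system->createDSPByType(FMOD_DSP_TYPE_TRANSCEIVER, &tx);
    tx->setParameterBool(FMOD_DSP_TRANSCEIVER_TRANSMIT, true);
    tx->setParameterInt(FMOD_DSP_TRANSCEIVER_CHANNEL, 0);
    sourceGroup->addDSP(FMOD_CHANNELCONTROL_DSP_TAIL, tx);

    // Receiver: placed on the 3D channel, listening on the same
    // transceiver channel, so it injects the transmitted mix there.
    FMOD::DSP *rx = nullptr;
    system->createDSPByType(FMOD_DSP_TYPE_TRANSCEIVER, &rx);
    rx->setParameterBool(FMOD_DSP_TRANSCEIVER_TRANSMIT, false); // receive mode
    rx->setParameterInt(FMOD_DSP_TRANSCEIVER_CHANNEL, 0);
    monoChannel->addDSP(FMOD_CHANNELCONTROL_DSP_TAIL, rx);

    // `monoChannel` should be created in FMOD_3D mode with its own
    // 3D position set, so the replayed stream is spatialized correctly.
}
```

Note that if the receiving channel is audible to the listener whose mix feeds the transmitter, you get exactly the feedback loop described in the question, so you may want to attenuate the receiver's gain to keep it stable.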