How to simulate sound reflection and how to calculate and interpolate reverberation probes?

Hello!

I’m currently working on a custom engine, and we need to implement something similar to Steam Audio’s features. Unfortunately, for several reasons we are unable to use Steam Audio, so we decided to emulate those features with built-in FMOD Studio tools (at least partly with some custom DSPs). The same goes for Resonance Audio and the Oculus Spatializer.

Here are my questions:

1. How to calculate reverb parameters?

Assume I have a grid of reverb probes and I’m able to trace rays and sample the material properties of all neighboring surfaces. Sabine’s equation lets me calculate T60 for different frequencies from the absorption coefficients of those materials. But how do I calculate early and late delay, diffusion, density, high cut, low gain, etc.?
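For reference, the Sabine step can be sketched like this. The room dimensions and absorption coefficients below are made up purely for illustration; only the formula T60 = 0.161 · V / A (with A the total absorption area in sabins) comes from Sabine’s equation.

```python
# Sabine's equation: T60 = 0.161 * V / A, where A = sum(S_i * alpha_i)
# (V in m^3, surface areas S_i in m^2, alpha_i absorption coefficients).
# All material coefficients below are illustrative, not measured data.

def sabine_t60(volume_m3, surfaces):
    """surfaces: list of (area_m2, {band_hz: absorption_coeff})."""
    bands = surfaces[0][1].keys()
    t60 = {}
    for band in bands:
        absorption = sum(area * coeffs[band] for area, coeffs in surfaces)
        t60[band] = 0.161 * volume_m3 / max(absorption, 1e-6)
    return t60

# Example: a 10 x 8 x 3 m room; walls, floor, and ceiling get made-up
# per-band absorption coefficients.
room = sabine_t60(
    volume_m3=10 * 8 * 3,
    surfaces=[
        (2 * (10 * 3 + 8 * 3), {125: 0.10, 1000: 0.20, 4000: 0.30}),  # walls
        (10 * 8, {125: 0.15, 1000: 0.25, 4000: 0.35}),                # floor
        (10 * 8, {125: 0.20, 1000: 0.60, 4000: 0.70}),                # ceiling
    ],
)
```

Running this per band gives the frequency-dependent decay times; the remaining parameters don’t fall out of Sabine’s equation and need separate heuristics.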

2. How to interpolate reverb parameters?

Assume I’ve calculated the reverb properties. What is the best way to interpolate between probes? I see two options:

  1. Get access to the Reverb return DSP on the “SFX” bus and linearly interpolate all of the parameters.
  2. Arrange the reverb probes on a tetrahedral grid, create four Reverb returns, and interpolate the final result by controlling the send amount to each return.
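If you go with option 2, the send amounts fall out of the barycentric weights of the listener position inside the enclosing probe tetrahedron. This is a pure geometry sketch under that assumption (non-degenerate tetrahedron, listener inside it); wiring the weights to actual send levels is left to the engine.

```python
# Barycentric weights of the listener position within a probe tetrahedron.
# Each weight becomes the send level to that probe's Reverb return.
# Assumes a non-degenerate tetrahedron; weights always sum to 1, and all
# four are in [0, 1] when the point is inside.

def barycentric_weights(p, a, b, c, d):
    """Weights (wa, wb, wc, wd) of point p in tetrahedron abcd."""
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

    def det3(c0, c1, c2):  # determinant of the 3x3 matrix with these columns
        return (c0[0] * (c1[1] * c2[2] - c1[2] * c2[1])
              - c1[0] * (c0[1] * c2[2] - c0[2] * c2[1])
              + c2[0] * (c0[1] * c1[2] - c0[2] * c1[1]))

    ab, ac, ad, ap = sub(b, a), sub(c, a), sub(d, a), sub(p, a)
    vol = det3(ab, ac, ad)  # 6x signed tetra volume; zero if degenerate
    wb = det3(ap, ac, ad) / vol  # Cramer's rule for ab*wb + ac*wc + ad*wd = ap
    wc = det3(ab, ap, ad) / vol
    wd = det3(ab, ac, ap) / vol
    return (1.0 - wb - wc - wd, wb, wc, wd)
```

At the centroid all four sends get 0.25; as the listener approaches a probe, that probe’s send approaches 1 and the others fade out, which is exactly the crossfade behavior you’d want.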

3. How to simulate reflection?

Assume I’ve found the best reflecting surface for the current listener position. I see the following solution:

  1. Add a new Return with EQ3, Delay, and Panner effects on it.
  2. Drive the reflection direction, frequency damping, and delay via global automation parameters.
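For what it’s worth, the delay and gain you’d automate in step 2 can be derived from the reflection path geometry. This sketch assumes a single known reflection point, simple inverse-distance attenuation, and a 343 m/s speed of sound; all of that is my assumption, not anything FMOD prescribes.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # assumed speed of sound in air

def first_reflection(source, listener, hit):
    """Delay (relative to the dry signal) and gain for one reflection.

    source/listener/hit are (x, y, z) tuples; hit is the reflection point
    on the surface. The panner direction would be the normalized vector
    from listener to hit (not computed here).
    """
    path = math.dist(source, hit) + math.dist(hit, listener)
    direct = math.dist(source, listener)
    # Extra travel time of the reflected path versus the direct path.
    delay_ms = 1000.0 * (path - direct) / SPEED_OF_SOUND_M_S
    # Simple inverse-distance attenuation over the full reflected path.
    gain = 1.0 / max(path, 1.0)
    return delay_ms, gain
```

Frequency damping would then come from the hit surface’s absorption coefficients, applied via the EQ.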

Does this solution look reasonable?

Thank you in advance!

P.S. FMOD version: 2.02.20.

Hey! Can you be more specific about what you’re trying to achieve with this and why those tools aren’t suitable? I’m not sure based on the description. It’s a fiddly area with a lot of possible perspectives on how to go about it.

I started a similar project a while ago and parked it because of the amount of workarounds it needed. A couple of things to be clear about are the relationships between the source, reverb, and listener.

If you have the frequency coefficients and you’ve calculated an RT60 already, then you have the decay time and some arbitrary values to map to other reverb parameters based on the materials. You should have the surface areas and size of the room from all this too. I can’t think of any that you don’t have.

The documentation refers to density as modal density, and diffusion as echo density. Pretty standard to work with, but a lot of those settings and how they map to your data is down to interpretation. For example, if I remember correctly, a small room could have a low echo density, but with a low modal density it could sound very metallic. If you wanted to use “low gain” to simulate some kind of modal room response then that’s another line of thought.
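To make that mapping concrete, here’s one possible, entirely heuristic translation from band-wise T60 plus room geometry into SFX-Reverb-style parameter values. The band choices, the mean-free-path early-delay estimate, and every constant below are my own assumptions to tune by ear, not anything the docs prescribe.

```python
def map_reverb_params(t60_by_band, volume_m3, surface_m2):
    """Heuristic mapping from T60 estimates (seconds, keyed by Hz) to
    SFX-Reverb-style parameters. Every constant here is a tunable guess."""
    mid, high = t60_by_band[1000], t60_by_band[4000]
    # Mean free path 4V/S: average distance a ray travels between
    # reflections in a room (standard room-acoustics result).
    mean_free_path_m = 4.0 * volume_m3 / surface_m2
    early_ms = 1000.0 * mean_free_path_m / 343.0
    return {
        "DecayTime":    1000.0 * mid,                  # ms, straight from mid-band RT60
        "EarlyDelay":   early_ms,                      # ms, ~one mean free path
        "LateDelay":    2.0 * early_ms,                # ms, crude "a couple of bounces"
        "HFDecayRatio": 100.0 * min(high / mid, 1.0),  # %, highs usually decay faster
        "Diffusion":    100.0,  # % echo density; lower for small, bare rooms
        "Density":      100.0,  # % modal density; too low sounds metallic
    }
```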

I won’t go into too many specifics as mine hasn’t been finished, but something you might find helpful or time-saving is to think of the reflection cues and the reverb as one thing instead of two. You ask about early and late reflections: you could treat the reflection cues as your early reflections (instead of using the parameter on the SFX reverbs) and the reverb you are trying to set as your late reverberation. Early reflections give most of the cues about the size of the room, the listener location, and the source location within it. Late reflections are the diffused version of the original, and (I ended up thinking, anyway) it’s reasonable to treat that as one averaged reverb of the space - from the listener’s perspective - when it’s combined with a good set of early reflections.

If you want several reverbs that you interpolate between, again carefully think about the relationship between listener, source, and reverb, as there are a few ways to approach this and I found some results a bit disappointing. Basically, you need to avoid ending up with “pockets” of reverberation that defeat the definition of what late reverberation actually is - uniform and diffuse. Are you also wanting a signal path whereby sources in different rooms get the correct proportion of the source room’s reverb and the listener room’s reverb? There are some cool snapshot and send tricks that people have used that might be a better route than this.

The last thing I’ll add is that I ended up believing that an advanced tap delay in combination with the ray tracing was necessary for this, so I made an FMOD Studio plugin for that purpose. It gives more early delay taps than the number of surfaces, with an LPF per tap, and is tunable to however precise you need it to be. That all still needs a clean way to be controlled through the engine, and I haven’t got around to making that side of the plugin yet. Otherwise, you have to use a very messy (for practical purposes) combination of studio parameters to control it.


Thank you so much for such a detailed response!

I’ve had great results using both solutions. Unfortunately, Steam Audio keeps crashing for us, and Resonance Audio’s FMOD plugin lacks the functionality we need. Moreover, we prefer to rely on our own solution whenever possible.

My goal is to achieve believable, or at least pleasant, reverberation for every area of the entire game level (a first-person shooter), along with air absorption, occlusion, transmission, pathing, and reflection. Air absorption, occlusion, and transmission are already done.

My first, naïve attempt was based on estimating T60 with Sabine’s equation from the room volume and surface area, both estimated using ray casts from the listener’s position. The result was very unstable, so I moved on to the next approach.

My second attempt was based on path tracing. I fed in the generated IR (basically just energy amounts at the appropriate time samples) and got dirty but pretty believable results, especially when the listener is in the center of the room. Since the built-in Convolution Reverb DSP is not suitable for real-time updates, I’m now focusing on estimating reverb parameters from the sampled data.

Here are some results: a binned IR (energy over 2000 ms) for a corridor with connected rooms (80×8×6 meters) and material absorption a = 0.1. The histogram allows me to compute T60.

Problem:

Now I’m trying to figure out how to estimate the early and late delay from this histogram, and how to estimate diffusion, density, high cut, and low gain.
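For the T60 and early delay at least, the standard trick on a binned energy IR is Schroeder backward integration: T60 from the slope of the decay curve (here taken over the -5 dB to -35 dB span and extrapolated to -60 dB), and early delay from the first bin carrying energy. A sketch under those assumptions:

```python
import math

def analyze_binned_ir(bins, bin_ms):
    """Estimate (T60 in seconds, early delay in ms) from a binned energy IR.

    T60 comes from Schroeder backward integration: the time the decay
    curve takes to fall from -5 dB to -35 dB, doubled to extrapolate
    to -60 dB. Early delay is simply the first non-empty bin.
    """
    total = sum(bins)
    edc_db, remaining = [], total
    for e in bins:
        # Energy remaining from this bin onward, relative to the total, in dB.
        edc_db.append(10.0 * math.log10(max(remaining / total, 1e-12)))
        remaining -= e
    t5 = next(i for i, db in enumerate(edc_db) if db <= -5.0)
    t35 = next(i for i, db in enumerate(edc_db) if db <= -35.0)
    t60_s = 2.0 * (t35 - t5) * bin_ms / 1000.0
    first_arrival = next(i for i, e in enumerate(bins) if e > 0.0)
    return t60_s, first_arrival * bin_ms
```

On a synthetic 1 dB-per-bin decay at 10 ms per bin this recovers the expected 0.6 s; on real traced data you’d probably want to drop the direct-sound bin before integrating so it doesn’t dominate the curve.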

Another problem is that path tracing produces extreme reflected sound energy when the listener is too close to a wall. And the early delay occurs too early, even for very large rooms.
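The near-wall blow-up is usually just the inverse-square (or inverse-distance) term diverging as the path length approaches zero; clamping to a minimum distance is the common fix. The 1 m default clamp below is an arbitrary assumption to tune:

```python
def reflected_energy(base_energy, path_length_m, min_distance_m=1.0):
    """Inverse-square attenuation with a minimum-distance clamp so the
    energy stays bounded when the listener hugs a wall."""
    d = max(path_length_m, min_distance_m)
    return base_energy / (d * d)
```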

Yeah, I think two reverb returns, one for the left and one for the right channel, would be a great way to simulate sound reflection.


That’s an interesting adjustment to the approach; it looks like you’re having fun.

Yeah, the ray-cast approach is only really effective if you gather the material data from each hit and have enough secondary reflections and associated data to generate the discrete early reflection delays and a reasonable estimate of the late reverberation settings. Otherwise I could imagine it being unusable. With a geometric approach, accurate diffraction will be a problem. A fun part for me was tracking scattering properties and using those to adjust the reflected ray angle. That all takes a lot of rays, though! It’s not so effective for sounds from other rooms, but I combined it with the pathfinding system to make that work with low overhead.

The neat part about that approach is that you have everything you need for most other parts of the system (e.g. occlusion and obstruction). It’s not acoustically accurate like a wave propagation simulation, but it’s a nice enough estimation and doesn’t require the complex maths. For commercial purposes all of these would need a lot of optimisation to run okay in real time, which is why I was wondering whether you weren’t better off with an existing solution.

Is this IR / path-tracing approach intended to be used with pre-computed paths or in real time?

🙂