Multiple reverb zones depending on where the sound is

Good Afternoon,

I am in need of some help with a system in our project and its relation to FMOD. We are using FMOD Studio V1.10.19 and Unity 2018.4.16f1.

We are currently implementing reverb zones and have concluded that using snapshots for the reverbs won't work for us, as it limits the reverb to the zone that one of the players is in.
Multiple players (in multiplayer) and AI (in singleplayer) can be in different reverb zones, and should be heard with those reverbs.

Does anyone have any idea how this could be achieved? We’ve already tried the following approaches, with notes on why each did not work well:

  • Transceiver: it is not possible to automate the channel value it sends to.
  • Manually setting up the sends in all events and having a parameter switch which one is sent to: not feasible, as it would require a huge amount of manual setup given the number of audio events and reverbs. Events would need over 20 sends to reverbs, because they are used in multiple levels, each with one or more different reverb zones. Unfortunately, effect chains are not available in this version of FMOD.
  • Automating parameters of a reverb effect on each event: a large performance cost, as each event would have its own reverb effect instead of a single one on the necessary mixer bus.
  • Snapshots: only one snapshot can be fully active at a time (others can be active, but they will blend), so the only reverb that applies is the one where the main player is.

Thank you,
Robin


There is no easy way to do this without preset effect chains, short of creating multiple possible routing paths for each event and changing which is used via sends and automation or giving each event its own reverb effect controlled by automation. You could potentially automate your snapshots’ intensities to apply the average reverb from all your listeners, such that each and every event instance is subject to the average of the reverbs it would experience were it audible to all your listeners, but this will only make the issue less obvious rather than solve it.
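
As a rough sketch of what that averaging could look like on the Unity side (the snapshot path, zone collider, and listener list below are placeholders, and this assumes the FMOD Studio 1.10 C# API, where a snapshot instance's intensity can be set through its built-in "Intensity" parameter via setParameterValue):

```csharp
using UnityEngine;
using FMOD.Studio;
using FMODUnity;

// Rough sketch: drive a reverb snapshot's intensity by the fraction of
// listeners (players/AI) currently inside its zone, so every event instance
// is subject to the average of the reverbs the listeners are experiencing.
// The snapshot path, zone collider, and listener list are placeholders.
public class AveragedReverbZone : MonoBehaviour
{
    [EventRef] public string snapshotPath;  // e.g. "snapshot:/CaveReverb" (assumed name)
    public Collider zoneBounds;             // volume describing the reverb zone
    public Transform[] listeners;           // every player/AI that counts as a listener

    private EventInstance snapshotInstance;

    void Start()
    {
        snapshotInstance = RuntimeManager.CreateInstance(snapshotPath);
        snapshotInstance.start();
    }

    void Update()
    {
        // Count listeners inside this zone (axis-aligned bounds check for brevity).
        int inside = 0;
        foreach (Transform t in listeners)
            if (zoneBounds.bounds.Contains(t.position))
                inside++;

        // Intensity 0-100, proportional to how many listeners are in the zone.
        float intensity = 100f * inside / Mathf.Max(1, listeners.Length);
        snapshotInstance.setParameterValue("Intensity", intensity);
    }

    void OnDestroy()
    {
        snapshotInstance.stop(STOP_MODE.ALLOWFADEOUT);
        snapshotInstance.release();
    }
}
```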

In a perfectly realistic simulation, each event instance would be routed into a unique reverb effect whose behavior depended on the location of the emitter relative to the listener. Unfortunately, this kind of perfectly accurate simulation is prohibitively expensive, so the usual method of approximating it is to use a single reverb effect for all event instances, and determine its properties based on the position of the listener. This method works well in most cases, but as you’ve observed, it does not mix well with splitscreen multiplayer.

Greetings! Our team is developing a 2D platformer in Unity and I have run into the same problem. A simple example: within the screen, the character is standing in a cave (say, on the left side of the screen), a mob is standing outside the cave, outdoors (on the right side of the screen), and the cave entrance is in the center of the screen (see image below). Both make sounds. The expectation is that the character’s sounds will have the cave reverb and the mob’s will have the outdoor reverb. The task doesn’t seem very difficult, but I don’t understand how to do this using snapshots.

It is also unclear how to set the minimum and maximum distance for reverberation fading in FMOD, like in the native Unity component Audio Reverb Zone.

Thank you.

So, you want event instances on the left side of the screen to be affected by the cave reverb, and event instances on the right side of the screen to not be affected by the cave reverb? That’s an unusual way of handling reverb; most games base reverb on the position of the listener or player avatar, rather than on the position of individual emitters.

Assuming that this unusual method is what you want, snapshots will not help you. They’re useful for doing the kind of reverb used in most games - but since your game is doing reverb in an unusual way, you’ll need an unusual method. I recommend using sends automated on a local parameter so that you can give event instances different routing depending on their in-game locations:

  1. In FMOD Studio’s mixer, create a new return bus for each location with a unique reverb. These buses will be used to mix and process the signals of event instances in their respective locations. (It is necessary to create one bus for each location because you need different event instances to be affected by different locations’ reverbs at the same time.)
  2. Add a reverb effect to each return bus’ signal track, and set that effect’s properties as appropriate for the corresponding in-game location.
  3. In the preset parameters browser, create a new preset parameter named “Location,” or something similar. Set its parameter type to “User: Labelled,” give it labels based on each location with a unique reverb, and make sure it’s local rather than global. This parameter will allow you to specify which reverb a given event instance should be affected by.
  4. In the preset effects browser, create a new effect chain.
  5. Add to this effect chain sends to each of the return buses you created in step 1.
  6. Add a gain effect to the effect chain, drag it to the right of all the sends, and set its value to -∞ dB.
  7. Automate the levels of the sends such that they’re 0 dB when the “Location” parameter is set to the label corresponding to the return bus they target, and -∞ dB when it is set to any other label.
  8. Add the effect chain to every event that you want to exhibit different reverb depending on its location.
  9. In your game’s code, set the value of each event instance’s “Location” parameter based on whether it is inside the cave or out of it (see the sketch after this list).
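
For step 9, a minimal Unity-side sketch might look something like the following. The event path, cave collider, and label order are assumptions made for illustration; it uses the FMOD Studio 1.10 C# API, where labelled parameters are set by passing the label’s index to setParameterValue.

```csharp
using UnityEngine;
using FMOD.Studio;
using FMODUnity;

// Hypothetical sketch of step 9: keep an emitter's "Location" parameter in sync
// with whether it is currently inside the cave volume. The event path, cave
// collider, and label order are assumptions made for illustration.
public class LocationParameterDriver : MonoBehaviour
{
    [EventRef] public string eventPath;  // any event that uses the preset "Location" parameter
    public Collider caveVolume;          // volume marking the cave

    // Labelled parameters are set by passing the label's index as a float;
    // these indices assume the labels were created in the order Outdoor, Cave.
    const float OUTDOOR = 0f;
    const float CAVE = 1f;

    private EventInstance instance;

    void Start()
    {
        instance = RuntimeManager.CreateInstance(eventPath);
        instance.start();
    }

    void Update()
    {
        // Keep the instance positioned on this emitter.
        instance.set3DAttributes(RuntimeUtils.To3DAttributes(gameObject));

        bool insideCave = caveVolume.bounds.Contains(transform.position);
        instance.setParameterValue("Location", insideCave ? CAVE : OUTDOOR);
    }

    void OnDestroy()
    {
        instance.stop(STOP_MODE.ALLOWFADEOUT);
        instance.release();
    }
}
```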

Because you want to change each individual event instance based on its location instead of changing the project’s mix based on the position of the listener, setting a minimum and maximum distance the listener should be from the reverb zone for that zone’s reverb to have an audible effect would be meaningless.

Hi Joseph! Thanks for such a detailed reply. So, I see that the reverb implementation I described is unnecessarily complex, although it is intended to be the most realistic simulation. If I understand correctly, the best way to implement reverb in my case (a 2D platformer) is to use snapshots with a single reverb effect on a return bus that the SFX bus (containing all the events the reverb should be applied to) sends to, and to change the reverb parameters by switching snapshots (e.g. Cave → Outdoor) when the character enters the corresponding trigger zones. Is this correct? However, with this implementation, the cave reverb (when the character is inside it) will also be applied to the mob outside the cave, which doesn’t seem quite correct (see the diagram above).

How do I set min/max distances for snapshots in this case, so that reverbs appear and disappear smoothly as the character moves toward or away from them?

Yes and no. It’s not necessarily the character’s position that’s important; it’s the position of the listener. Most third-person games attach the listener to the camera rather than the character, though there are exceptions.

Whether this method is the “best” way is a matter of opinion, but it is a popular way used in many games, and most players accept it without batting an eyelid.

It’s true that in the real world, sounds are affected by the acoustic properties of their locations of origin. However, they aren’t only affected by the acoustic properties of their locations of origin, because sounds travel: Every sound that an observer hears, regardless of its origin point, must travel to the observer’s location in order to reach the observer’s ears, and so will be audibly affected by that location’s acoustic properties as well. The most realistic option would therefore be for every sound to be affected by the acoustic properties of both its origin point and the listener’s location. To do this, you would need to use both the nine-step method I detailed in my earlier post for setting up reverb for each individual event instance based on its location, and the simpler method for setting up reverb based on the listener’s location.

However, the most realistic option is not necessarily the best one. Just as 3D modelers often use fill lights to illuminate areas that would realistically be in darkness, and action games often feature unrealistic “coyote time” to make their jump mechanics less frustrating, game sound designers often implement unrealistic sound behavior in the name of making their games more playable and readable.

Accordingly, most games base their reverb only on the location of the listener. After all, people are used to all sounds they hear being affected by the acoustic properties of their current location, regardless of whether those sounds originate within that location or outside of it, so this rarely sounds unnatural. (In some cases, they will supplement this system with special-case behavior designed to account for edge cases, for example by designing special versions of important events that seem likely to appear near the border between areas.)

Again, whether this is the best possible way of doing things is a matter of opinion. Every game is unique, and only you and your team know your game’s unique requirements, so only you can say whether it fits your game.

Assuming you’re using snapshots, there are a few different ways you could do it.

The simplest is to automate each snapshot’s intensity property on a built-in distance parameter such that the intensity is high when distance is low, and low when distance is high. As described in our documentation on the topic, each snapshot instance is associated with an FMOD Event Emitter, meaning that it has a location; as the distance between the listener and the emitter smoothly decreases, the intensity of the snapshot will smoothly increase. You’ll probably want to tweak the automation curve a little to get the exact behavior you want.
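
If you’re starting the snapshot from code rather than from a Studio Event Emitter component placed in the scene, a minimal sketch might look like this (the snapshot path is a placeholder, and this assumes the FMOD Studio 1.10 Unity integration):

```csharp
using UnityEngine;
using FMOD.Studio;
using FMODUnity;

// Minimal sketch: start a reverb snapshot at this object's position so that the
// snapshot's built-in distance parameter (and any intensity automation on it in
// FMOD Studio) tracks how far the listener is from the zone. The snapshot path
// is a placeholder; a Studio Event Emitter component pointed at the snapshot
// achieves the same thing without code.
public class ReverbZoneSnapshot : MonoBehaviour
{
    [EventRef] public string snapshotPath;  // e.g. "snapshot:/CaveReverb" (assumed name)

    private EventInstance snapshotInstance;

    void Start()
    {
        snapshotInstance = RuntimeManager.CreateInstance(snapshotPath);
        // Position the instance at the zone so distance is measured from here
        // to the listener.
        snapshotInstance.set3DAttributes(RuntimeUtils.To3DAttributes(gameObject));
        snapshotInstance.start();
    }

    void OnDestroy()
    {
        snapshotInstance.stop(STOP_MODE.ALLOWFADEOUT);
        snapshotInstance.release();
    }
}
```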