I did a bit of research but couldn’t really figure this out myself. Does anyone know the most efficient way to apply reverb to sounds depending on the player’s location (in short, reverb zones)?
I’ve been thinking about creating reverb parameters and setting their values on trigger enter (roughly the approach sketched below), but that seems very redundant. Is there an easier way to do this?
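For context, a minimal sketch of what I mean, assuming a recent FMOD for Unity integration and a hypothetical global parameter named “ReverbAmount” that the events’ reverb sends would be automated on:

using UnityEngine;

public class ReverbTriggerZone : MonoBehaviour
{
    // How strongly this zone’s reverb should apply while the player is inside.
    public float reverbAmount = 1f;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            FMODUnity.RuntimeManager.StudioSystem.setParameterByName("ReverbAmount", reverbAmount);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            FMODUnity.RuntimeManager.StudioSystem.setParameterByName("ReverbAmount", 0f);
    }
}

Doing this per zone and per parameter is what feels redundant to me.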
I am currently tackling the same problem, so I could use help with this as well.
In a Unity component I am trying to create a Reverb3D object.
In Awake of the MonoBehaviour:
FMOD.RESULT status;
FMOD.System system = null;

// reverb (FMOD.Reverb3D), props (FMOD.REVERB_PROPERTIES) and the numeric
// values below are fields on the component.

// Get the low-level system from the Studio system.
status = FMOD_StudioSystem.instance.System.getLowLevelSystem(out system);
Debug.Log(Enum.GetName(typeof(FMOD.RESULT), status));

// Create the Reverb3D object on the low-level system.
status = system.createReverb3D(out reverb);
Debug.Log(Enum.GetName(typeof(FMOD.RESULT), status));

// Configure the reverb properties.
props = new FMOD.REVERB_PROPERTIES(decayTime, earlyDelay, lateDelay, hfReference, hfDecayRatio, diffusion, density, lowShelfFrequency, lowShelfGain, highCut, earlyLateMix, wetLevel);
status = reverb.setProperties(ref props);
Debug.Log(Enum.GetName(typeof(FMOD.RESULT), status));

// Place the reverb sphere at this object's position.
FMOD.VECTOR position = FMOD.Studio.UnityUtil.toFMODVector(transform.position);
status = reverb.set3DAttributes(ref position, minDistance, maxDistance);
Debug.Log(Enum.GetName(typeof(FMOD.RESULT), status));

// Enable the reverb.
status = reverb.setActive(true);
Debug.Log(Enum.GetName(typeof(FMOD.RESULT), status));
All FMOD.RESULTs return OK, but the reverb zone has no audible effect. I am not a sound engineer, but do events need special flags in FMOD Studio, or is it necessary to manually enable reverb effects on the FMOD system instance?
To simulate a reverb zone in Studio, we recommend creating a snapshot and controlling the snapshot’s intensity by automating it on the built-in distance parameter. This allows you to create a positional reverb that can also control other effects. You can also automate the intensity with a game parameter, which can then be controlled from script.
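If you take the scripted route, a minimal sketch might look like this (assuming the current FMOD for Unity integration; the snapshot path and the “Intensity” parameter are placeholder names, and setParameterByName is setParameterValue in older integrations):

using UnityEngine;

public class ScriptedReverbZone : MonoBehaviour
{
    public Transform listener;      // usually the player or camera
    public float maxDistance = 20f; // intensity fades to zero at this range

    FMOD.Studio.EventInstance snapshot;

    void Start()
    {
        snapshot = FMODUnity.RuntimeManager.CreateInstance("snapshot:/RoomReverb");
        snapshot.start();
    }

    void Update()
    {
        // Map listener distance to 0..1 and feed it to the game parameter
        // that automates the snapshot intensity in FMOD Studio.
        float d = Vector3.Distance(listener.position, transform.position);
        snapshot.setParameterByName("Intensity", Mathf.Clamp01(1f - d / maxDistance));
    }

    void OnDestroy()
    {
        snapshot.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        snapshot.release();
    }
}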
Hey guys, I’m probably a little late to the party, so most likely there is an even smarter solution by now. But as an addition from testing around in sandbox mode, I noticed (on top of pete’s great idea) that “panning” the reverb via surround direction and extent, using the built-in distance and direction parameters, results in really natural, “trackable” reverb tails.
As a use case, I was imagining an explosion outside of a cave in something like Battlefield.
In a practical implementation something like this might be cut due to resource constraints, but take it as food for thought; the scripting side is sketched below.
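For the built-in distance and direction parameters to update, the snapshot instance only needs 3D attributes. A sketch, assuming the current FMOD for Unity integration (the snapshot path is a placeholder):

using UnityEngine;

public class PannedReverbZone : MonoBehaviour
{
    FMOD.Studio.EventInstance snapshot;

    void Start()
    {
        // Position the snapshot instance at the zone so the built-in
        // distance/direction parameters can drive the surround
        // direction/extent automation described above.
        snapshot = FMODUnity.RuntimeManager.CreateInstance("snapshot:/CaveReverb");
        snapshot.set3DAttributes(FMODUnity.RuntimeUtils.To3DAttributes(transform.position));
        snapshot.start();
    }

    void OnDestroy()
    {
        snapshot.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        snapshot.release();
    }
}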
Hmm… Is it intentional that you’re using the location of the snapshot instance to automate the surround direction property of the return bus? It seems like it’d result in the apparent position of the reverb’d sound circling around the listener as the listener approaches or moves away from the position of the snapshot instance, and also in its apparent position turning with the listener whenever the listener rotates.