Automation across overriding snapshots

Incoming NOOB question.

We are automating parameters on a reverb to control it dynamically. However, the reverb on a single return bus does not hold automation from one overriding snapshot to another. We also can’t create a simple plugin preset, we can’t copy the properties of the reverb (with the automation) and paste them onto the same reverb in a different snapshot, and we can’t put a shared effect on a bus. This seems like really basic functionality, so surely I’m missing something. Do we need to be using blended snapshots? That’s the only other thing we can think of. Not being able to save presets for the reverb is pretty frustrating.

We have multiple levels with multiple snapshots, and each level has 5 generic reverb returns that each correspond to various snapshots. I want to be able to dynamically control the reverb based on distance from walls etc., and we think a single reverb is the easiest solution, but we also worry about other sounds in the reverb being affected by that automation, and about smoothly transitioning from one room to another without any weird reverb cutoffs. We could really use some insight, best-practice advice, and a solution for getting the automation to work in any snapshot. Thanks!

Sorry for the delayed response!

Can I get you to elaborate on how the automation on the reverb effect doesn’t hold across multiple overriding snapshots?

Unfortunately, effects presets aren’t compatible with the mixer at the moment, but I do agree that their addition would make implementing effect behavior across multiple buses a lot easier. I’ve noted your interest on our improvement tracker.

You may want to look into using the Studio Scripting API to apply property values/automation more easily across snapshots/reverbs. The Scripting API docs are admittedly a bit lacking, so if this sounds like something you’d be interested in, I’d be happy to create a mock script to demonstrate some of the functionality you’d need.

Using a channel-based approach of multiple reverb returns is generally what we recommend. It’s simple to set up, has minimal CPU overhead, and smoothly transitioning between each reverb return can be handled using parameters to automate the reverb send levels on event instances in the space. The parameter value can then be set from the engine based on the position of the instances with respect to each room. Dynamically adjusting reverb properties based on the listener’s position is entirely possible with this approach.
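To sketch what that parameter-driven approach can look like on the engine side (everything here is illustrative - the room shapes, names, and blend radius are assumptions, not part of any FMOD API): each frame you compute a 0-1 send level per reverb return from the listener’s position, with an overlap region so adjacent rooms crossfade rather than cut off abruptly.

```python
import math

def room_send_levels(listener_pos, rooms, blend_radius=2.0):
    """Compute a 0-1 send level for each room's reverb return.

    rooms: dict mapping room name -> (center, half_extents), treating each
    room as an axis-aligned box. Levels fade linearly to zero over
    `blend_radius` units outside each room, so two adjacent rooms
    crossfade instead of one reverb cutting off at the doorway.
    """
    levels = {}
    for name, (center, half) in rooms.items():
        # Distance from the listener to the room's box (0 when inside it).
        dist = math.sqrt(sum(
            max(abs(p - c) - h, 0.0) ** 2
            for p, c, h in zip(listener_pos, center, half)))
        levels[name] = max(0.0, 1.0 - dist / blend_radius)
    return levels
```

Each resulting level would then be fed to FMOD as a game-driven parameter (e.g. via your engine integration’s equivalent of `Studio::EventInstance::setParameterByName`) that automates the send level to the corresponding reverb return.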

However, dynamically adjusting reverb properties with regards to the positions of individual objects in the space isn’t really possible with a channel-based approach, as all the objects in the space are being processed by a single reverb instance. In this case, the issue begins to fall more into the domain of acoustics and object-based spatialization - you may want to look into more advanced object-based spatializer effects that model reverberation and acoustics, like Resonance Audio.

We couldn’t get a reverb with automation curves to hold those curve values when we selected another snapshot that used the same reverb. And we couldn’t just copy the reverb and paste it.

We used a blending snapshot to make distance automation curves that override all the other snapshots, and set different settings on the reverbs in the other snapshots while still automating the early delay, for example.
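For anyone setting up something similar: the distance value that drives a blending snapshot’s automation curves is just a parameter the game sets each frame. A minimal sketch of computing distance-to-nearest-wall, assuming walls are represented as planes (a point plus a unit normal - that representation, and the function name, are illustrative assumptions, not anyone’s actual implementation):

```python
def distance_to_nearest_wall(listener_pos, walls):
    """Return the smallest perpendicular distance from the listener
    to any wall plane.

    walls: list of (point_on_wall, unit_normal) tuples. The result is
    what you would feed into a game-driven distance parameter whose
    curves the blending snapshot automates.
    """
    return min(
        abs(sum((p - w) * n for p, w, n in zip(listener_pos, point, normal)))
        for point, normal in walls)
```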

We got it figured out. Thanks!
