Suppose we have the following room layout: rooms A, B, C, and D, separated from each other by doors.
Let’s assume I have emitters in every room, and each room’s emitters are routed to their own bus. The listener is able to move between rooms, and I know which room she is in. To simulate what she should hear, I could implement mixer snapshots (“inside_a”, “inside_b”, and so on) that activate when she enters the corresponding room. So when the listener is “inside_b”, I can attenuate the sound coming from A, C, and D depending on the state of the doors.
So far so good. But what happens when the listener is, for example, “inside_a”? I can attenuate the sound “coming from” B, but that sound must not only include B’s own emitters; it also has to incorporate the emitters in C and D, while still honoring the state of every door along the way.
As you can see, the combinations multiply quickly, and I can’t find a good way to simulate this that works in 100% of cases without using occlusion, which for the moment I can’t do since I have no access to the backend. All I get in Studio are the “inside where” and “door state” parameters.
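For what it’s worth, here is a minimal sketch of the logic I’m effectively trying to reproduce with snapshots. It is plain Python, not the FMOD API, and every name in it is hypothetical: rooms are graph nodes, doors are edges weighted by their “openness” (0 = closed, 1 = fully open), and the gain for each room’s bus is the best product of openness values along any path from the listener’s room.

```python
def room_gains(doors, listener_room):
    """Hypothetical sketch (not FMOD API).

    doors: dict mapping frozenset({room_a, room_b}) -> openness in [0.0, 1.0]
    Returns a dict mapping each reachable room to the gain its bus should
    get, i.e. the maximum product of door openness along any path from
    listener_room. Rooms with no open path are simply absent (gain 0).
    """
    gains = {listener_room: 1.0}
    frontier = [listener_room]
    while frontier:
        nxt = []
        for cur in frontier:
            for door, openness in doors.items():
                if cur in door:
                    (other,) = door - {cur}
                    g = gains[cur] * openness
                    # Keep only the loudest path to each room.
                    if g > gains.get(other, 0.0):
                        gains[other] = g
                        nxt.append(other)
        frontier = nxt
    return gains


doors = {
    frozenset({"A", "B"}): 1.0,  # A-B door open
    frozenset({"B", "C"}): 0.5,  # B-C door half open
    frozenset({"C", "D"}): 0.0,  # C-D door closed
}
gains = room_gains(doors, "A")
# From A: B at full level, C attenuated through B, D inaudible.
```

If something like this can be driven from the two parameters I do have, each snapshot would only need to scale its room buses by these gains instead of hand-authoring every room/door combination.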