# Get how much volume a sound has at a point

Is there a way to see how much “sound” can reach a point? For example, if I play a sound inside a room, I want to know how much of it reaches outside; or if I play a sound in the open, I want to know how far away it could be heard based on its loudness at a certain time. This is for an AI sound-detection system I’m working on.

We do not have any public-facing API methods to assist with this, so you would need to calculate this yourself. I think an in-depth discussion of acoustics may be beyond the scope of these forums, but at a high level the three main acoustic phenomena you want to explore are the Inverse Square Law for outdoor areas, the Room Constant for indoors, and the Sound Transmission Class of different materials to determine occlusion.

• Inverse Square Law:
The default Spatializer rolloff mode is Inverse Square, and best represents what happens in the real world. In a free field this equates to ~6dB of attenuation for every doubling of distance. So if you are standing one meter from a sound source and it is 50dB, then at 2 meters it will be 44dB, at 4 meters 38dB, and so on.
• Room Constant:
Determining the dB intensity of a sound at a point in a room is trickier because of reverberation and the sound absorption characteristics of the materials in your room. Once you have these defined you can calculate the room constant of your room, and factor that into the previous inverse square calculation to determine the dB intensity at any point in the room.
• Occlusion:
The occlusion characteristics of materials can be approximated based on their Sound Transmission Class (STC rating), which equates to a ~1dB reduction per STC unit. A single pane glass window with an STC rating of 27 for example will reduce the sound intensity by 27dB, and a 20cm thick concrete wall with an STC rating of 72 will occlude 72dB of sound intensity.
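Putting the three effects above together numerically, here is a minimal Python sketch. It assumes a free field for the inverse-square part, the standard semi-reverberant-field formula `Lp = Lw + 10*log10(Q/(4*pi*r^2) + 4/R)` for the in-room part (where `Q` is the source directivity factor and `R` the room constant), and the ~1dB-per-STC-unit rule for occlusion; the surface areas and absorption coefficients are values you would need to define for your own scene:

```python
import math

def level_at_distance(level_db_at_ref, distance_m, ref_distance_m=1.0):
    """Free-field inverse-square rolloff: ~6 dB per doubling of distance,
    relative to the level measured at ref_distance_m."""
    return level_db_at_ref - 20.0 * math.log10(distance_m / ref_distance_m)

def room_constant(surface_area_m2, avg_absorption):
    """Room constant R = S * a / (1 - a), where S is the room's total
    surface area and a is the average absorption coefficient of its
    materials (0 = fully reflective, 1 = fully absorbent)."""
    return surface_area_m2 * avg_absorption / (1.0 - avg_absorption)

def level_in_room(source_power_db, distance_m, room_const, directivity_q=1.0):
    """Semi-reverberant field: direct path plus reverberant energy.
    Lp = Lw + 10*log10(Q / (4*pi*r^2) + 4/R)."""
    return source_power_db + 10.0 * math.log10(
        directivity_q / (4.0 * math.pi * distance_m ** 2)
        + 4.0 / room_const)

def occluded_level(level_db, stc_rating):
    """Approximate transmission loss as ~1 dB per STC unit."""
    return level_db - stc_rating
```

For example, `level_at_distance(50.0, 2.0)` gives the ~44dB from the inverse-square example above, and `occluded_level(level, 27)` models the single-pane window. Note that a more reverberant room (smaller room constant) yields a higher level at the same distance, which is why the indoor case cannot use the inverse square law alone.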

Hopefully that will get you pointed in the right direction.

@jeff_fmod, isn’t it possible to set a virtual listener (which wouldn’t be routed to the real output) at the AI location? And then, isn’t it possible to retrieve the level of this virtual output, to base the AI behavior on that level?


Would love an answer to this, it sounds like it would be the perfect solution.

Presuming you only had one listener, you could place it anywhere in your scene, and for each playing `Channel` in question call `Channel.getAudibility` for an estimation of the volume at that point. Not sure that would be sufficiently accurate for your purposes, but it would be simple to set up.
As for occlusion, you could leverage the Core API Geometry System, which will reduce the volume of `Channel`s occluded by `Geometry` that you provide, and again use `Channel.getAudibility` to retrieve the `Channel`’s final volume, which will have occlusion factored in.
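If you end up rolling the occlusion check yourself instead of using the Geometry system, the core of it is a line-of-sight test from source to listener. As a rough, engine-agnostic illustration (not FMOD's actual implementation), this sketch uses a Möller–Trumbore segment/triangle intersection and subtracts the STC rating of every hypothetical wall triangle the segment crosses from the inverse-square level:

```python
import math

def _sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def _dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def _cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def segment_hits_triangle(p0, p1, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: does the segment p0->p1 cross triangle (v0,v1,v2)?"""
    d = _sub(p1, p0)
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(d, e2)
    a = _dot(e1, h)
    if abs(a) < eps:                  # segment parallel to triangle plane
        return False
    f = 1.0 / a
    s = _sub(p0, v0)
    u = f * _dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = _cross(s, e1)
    v = f * _dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * _dot(e2, q)
    return 0.0 <= t <= 1.0            # hit lies within the segment itself

def audible_level(source_db, source_pos, listener_pos, walls):
    """Inverse-square rolloff (dB referenced at 1 m), minus the STC rating
    of every wall triangle the source->listener segment passes through.
    `walls` is a list of ((v0, v1, v2), stc_rating) pairs."""
    r = math.dist(source_pos, listener_pos)
    level = source_db - 20.0 * math.log10(max(r, 1.0))
    for (v0, v1, v2), stc in walls:
        if segment_hits_triangle(source_pos, listener_pos, v0, v1, v2):
            level -= stc
    return level
```

For instance, a 50dB source at 10m through one STC-27 window triangle comes out at 50 − 20 − 27 = 3dB at the listener. A real implementation would usually hand this test off to the engine's physics raycast rather than iterating triangles by hand.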
