Hi Joseph! Looks like you're about to get real busy with the licensing of Studio changing and all…
Anyway, I wanted to ask an implementation question about multiple intensities of music loops and how best to handle them while using as few voices as possible. I'm aware of the recommended method of using multiple audio tracks driven by a single parameter; what I'm less sure of is whether that uses one voice per track on the device (in my case, 7 tracks total). Currently I'm handling this in Unity's audio system with two AudioSources, crossfading between them at precise sample points, so that's one voice most of the time and two only during a crossfade.
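For context, my current Unity-side approach looks roughly like the sketch below: the incoming mix is scheduled sample-accurately with `AudioSource.PlayScheduled`, and volumes are ramped over the fade window. (Class and field names here are illustrative, not my actual code.)

```csharp
using UnityEngine;

// Illustrative sketch: crossfade two AudioSources at a sample-accurate
// loop boundary, so only one voice plays outside the fade window.
public class MusicCrossfader : MonoBehaviour
{
    public AudioSource current;     // source playing the active mix
    public AudioSource next;        // source for the incoming mix
    public float fadeSeconds = 0.5f;

    double fadeStartDsp = -1.0;

    // Schedule the incoming mix to start exactly at the next loop boundary.
    public void CrossfadeTo(AudioClip nextClip)
    {
        int samplesLeft = current.clip.samples - current.timeSamples;
        double startTime = AudioSettings.dspTime +
                           (double)samplesLeft / current.clip.frequency;

        next.clip = nextClip;
        next.volume = 0f;
        next.PlayScheduled(startTime);   // sample-accurate start
        fadeStartDsp = startTime;
    }

    void Update()
    {
        if (fadeStartDsp < 0.0) return;
        float t = (float)((AudioSettings.dspTime - fadeStartDsp) / fadeSeconds);
        if (t < 0f) return;              // boundary not reached yet
        t = Mathf.Clamp01(t);
        next.volume = t;
        current.volume = 1f - t;
        if (t >= 1f)
        {
            current.Stop();              // back down to a single voice
            var tmp = current; current = next; next = tmp;
            fadeStartDsp = -1.0;
        }
    }
}
```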
What I'm curious about is whether the multiple-track method on an Event is more or less efficient in terms of voice allocation, and, if it's considerably less efficient, what the most efficient method would be to crossfade between 7 different mixes of a single piece of music.