Changing a StudioEventEmitter's Event During Runtime

I initially posted this on an old topic (here), but I wanted to avoid necroposting in case a fresh topic was preferable nearly 2 years later.

The linked thread discusses the exact problem I ran into today: I have multiple StudioEventEmitter components, each on its own child object of my entities. I would like to be able to easily swap out the event reference associated with a given emitter at runtime, to keep things tidy while still giving me the control I want over the audio. This is not possible by simply changing the EventReference on my StudioEventEmitter actionAudioSource, e.g.:

public void Footstep()
{
    if (actionAudioSource != null)
    {
        actionAudioSource.EventReference = actionFootstepEvent;
        actionAudioSource.Play();
    }
}

The above thread contains a solution showing how to implement a method that manually looks up and changes the reference, so that’s great! But where I continue to be a little confused (and slightly nervous) when it comes to integrating FMOD Studio into my project is best practices. I realize that every project has different needs, but that’s not much of a barometer for me, given how little understanding I currently have of the nuances of implementing this audio solution.
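For reference, the kind of swap I’m after looks roughly like the sketch below. This is my own rough reading of the thread’s workaround, not an official API: `PlaySwapped` is a name I made up, and whether `Stop()` alone is enough to make the emitter re-resolve the new event may depend on the integration version, since the emitter can cache its event description internally (which is presumably why the linked thread does a manual lookup).

```csharp
using FMODUnity;
using UnityEngine;

public class EmitterSwapper : MonoBehaviour
{
    public StudioEventEmitter actionAudioSource;
    public EventReference actionFootstepEvent;

    // Hypothetical helper: stop the emitter before assigning a new
    // EventReference so the next Play() resolves the new event instead
    // of reusing the old instance.
    public void PlaySwapped(EventReference newEvent)
    {
        if (actionAudioSource == null) return;

        actionAudioSource.Stop();                    // release the current instance
        actionAudioSource.EventReference = newEvent; // point the emitter at the new event
        actionAudioSource.Play();                    // creates an instance of the new event
    }

    public void Footstep()
    {
        PlaySwapped(actionFootstepEvent);
    }
}
```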

For me, a new FMOD Studio user reading up on how to call audio events in Unity, what eventually led me to settle on a limited set of emitters per entity, one for each audio context (one for actions, one for voices, one for getting hit, etc.), was that I could not wrap my head around the trade-offs between PlayOneShot (not held in memory, but more limited out of the box for controlling parameters) and creating instances (held in memory but not automatically released, so I must handle my own cleanup).

I may well be mistaken; this is my current understanding based on poring over the documentation, as well as other developers’ YouTube videos, blogs, and forum posts here. I was very surprised that it took me so long to find this particular method of having a dedicated emitter onto which you can load the appropriate events. (This Just Works :tm: with Unity’s default AudioSource / AudioClip interaction.)

public StudioEventEmitter voiceAudioSource; //Used for any voice callouts this entity may have.
public StudioEventEmitter actionAudioSource; //Used for emitting effect sounds related to attacks and such.
public StudioEventEmitter hitAudioSource; //Dedicated sound emitter for hit and damage effects.

public EventReference voiceAttackEvent;
public EventReference actionAttackEvent;

...


To me as a game designer, this seems like a logical, if simplistic, solution (though I am making a retro-styled game with relatively simple audio anyway).

cameron-fmod, who graciously provided a workaround in the linked thread, then said:

After a lot of internal discussion, we came to the conclusion that this was not the ideal way to use the StudioEventEmitters.

I can accept this, but for the sake of education, could you go into a bit more detail as to why this approach may not be ideal? I want to be sure I’m not setting my project up for a situation where I shoot myself in the foot. :slight_smile:

Love FMOD Studio and there’s no way I can go back, but on this most basic point, I’m not as confident as I’d like to be. Thanks in advance!

Regarding your confusion around PlayOneShot vs. EventInstance: you can use PlayOneShot in tandem with creating and managing your own instances; it’s simply a question of what you’re using each of them for. Managing your own events gives you a lot of freedom in how you approach audio implementation, but potentially requires you to keep a reference to each instance, update parameters and 3D attributes, handle cleanup, and so on. PlayOneShot and PlayOneShotAttached are useful for playing audio that you don’t want to actively manage, which reduces the amount of coding you need to do.
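To illustrate the split, here is a rough sketch (the event fields and the "RPM" parameter name are placeholders, not from any particular project): a one-shot for fire-and-forget audio next to a managed instance that you update and release yourself.

```csharp
using FMOD.Studio;
using FMODUnity;
using UnityEngine;

public class OneShotVsInstance : MonoBehaviour
{
    public EventReference hitEvent;     // fire-and-forget
    public EventReference engineEvent;  // long-lived, parameter-driven

    private EventInstance engineInstance;

    void Start()
    {
        // Managed instance: you hold the handle, update it, and release it yourself.
        engineInstance = RuntimeManager.CreateInstance(engineEvent);
        engineInstance.start();
    }

    void Update()
    {
        // Keep 3D position and parameters up to date each frame.
        engineInstance.set3DAttributes(RuntimeUtils.To3DAttributes(gameObject));
        engineInstance.setParameterByName("RPM", 0.5f); // placeholder parameter
    }

    public void OnHit()
    {
        // One-shot: FMOD starts and releases the instance for you.
        RuntimeManager.PlayOneShot(hitEvent, transform.position);
    }

    void OnDestroy()
    {
        engineInstance.stop(STOP_MODE.ALLOWFADEOUT);
        engineInstance.release();
    }
}
```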

To paraphrase what Cameron said in the thread you linked: StudioEventEmitters were designed to handle single events, and to serve as an example of interacting with the API directly once you’ve outgrown them; however, they have had more and more functionality added to them over time.

Besides the fact that you’re effectively creating your own fork of a StudioEventEmitter by modifying it, there’s nothing wrong with using them as you’ve described - much of the more “complex” functionality that a StudioEventEmitter can’t handle itself can be achieved by using your own scripts in parallel, and accessing its public fields as you need them.

You can definitely implement a similar solution for your game without using StudioEventEmitter - you can replicate your “one emitter per audio context” structure by keeping a reference to an EventInstance for each audio context instead. Obviously, it’ll take a little more coding, since you’ll be handling manually a lot of what a StudioEventEmitter does for you, but the StudioEventEmitter script itself, as well as the basic Unity scripting examples in our docs, are good references, and you’ll have a lot more freedom to customize the behavior of your audio.
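A minimal sketch of that structure, with one EventInstance per context (all names here are placeholders, not from our docs): swapping an event in a context tears down the old instance and creates a new one, which is essentially what an emitter does for you.

```csharp
using FMOD.Studio;
using FMODUnity;
using UnityEngine;

public class EntityAudio : MonoBehaviour
{
    // One instance per audio context, mirroring the one-emitter-per-context idea.
    private EventInstance voiceInstance;
    private EventInstance actionInstance;

    public void PlayVoice(EventReference newEvent)  => PlayOn(ref voiceInstance, newEvent);
    public void PlayAction(EventReference newEvent) => PlayOn(ref actionInstance, newEvent);

    private void PlayOn(ref EventInstance instance, EventReference eventRef)
    {
        // Tear down whatever was playing in this context.
        ReleaseIfValid(ref instance);

        instance = RuntimeManager.CreateInstance(eventRef);
        instance.set3DAttributes(RuntimeUtils.To3DAttributes(gameObject));
        instance.start();
    }

    private void ReleaseIfValid(ref EventInstance instance)
    {
        if (instance.isValid())
        {
            instance.stop(STOP_MODE.IMMEDIATE);
            instance.release();
        }
    }

    private void OnDestroy()
    {
        // Manual cleanup - this is the garbage collection you'd otherwise
        // get from PlayOneShot or a StudioEventEmitter.
        ReleaseIfValid(ref voiceInstance);
        ReleaseIfValid(ref actionInstance);
    }
}
```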

Huh, I somehow completely missed that page listing scripting examples and will peruse that. Thanks a ton for the detailed explanation, you guys rock!
