Fading In and Out Music Tracks

Thanks for all your help on this.

Got it. My fault, I should RTFM. Right-click in the Routing Browser rather than the Mixer itself. Thanks.

@joseph
I am creating a music loop that plays between 3 separate songs, that play in sync but never together. I have everything working perfectly (transitions do not repeat position of loop) and I’m using mixer snapshots on a parameter sheet to transition. My issue is as follows:

On the AHDSR Intensity modulation for each snapshot in the parameter sheet, if the attack value is greater than 0, then even when transitioning between songs, the track that should not play becomes audible for a moment (e.g. when transitioning 1>2, song 3 gains volume on the mixer and can be heard). If the attack value of the same control is 0, then the ascending transitions (1>2>3) crossfade nicely, but descending transitions (3>2>1) happen instantly and sound very abrupt. This is reflected on the mixer, where I can see the value snapping when descending.

Thanks in advance!

I am addressing this issue in the other thread you have created.

Sorry to resurrect an old thread, but this is the first return on Google.

The original answer is many years old. Is this still the prescribed method for fading individual tracks in and out? We also need to do this for adaptive music, where individual instruments are played for each character.

I believe I can do this by creating a new parameter for each track and then using that parameter to manipulate that track’s volume, but I do feel like this may not be the correct route, since you have to create a bunch of parameters.

Even when this thread was new, there were multiple different ways to fade individual tracks in and out, each with their own advantages and disadvantages. That’s why three different ways of doing it are discussed above.

All three of these ways are still valid, though some of the terminology and tools involved have changed over the years: “Sound modules” have since been renamed “instruments,” the term “timelocked” has been replaced with “synchronous,” the “hold” parameter setting has been replaced by the “Hold value during playback” preset parameter setting, and so on.

Whether this route is correct depends on your project’s requirements - which is to say, there’s no One True Way: If a method produces the behavior you want and doesn’t cause other problems, it’s a good method to use.

The main advantage of automating the volume of each individual instrument is that it gives you precise control over every track, allowing you to set any of your tracks to any volume at any time. This is useful if you want to use that degree of fine control in your game, but may be unnecessarily complex otherwise.

If you just want to be able to ramp your characters’ instruments between silent and full volume, a different method might be more appropriate.

That being said, within an event instance, a parameter can only have one value. Thus, if there are multiple things in an event that you want your game to control independently, you will need one parameter for each of those things. This means that, if you want to be able to control each of your characters’ tracks without affecting the behavior of the others, you definitely need one parameter for each character; methods will differ in how those parameters are used, not in how many parameters they require.
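
As a rough illustration only (the event path and parameter names below are made up, and this assumes the FMOD Studio 2.x C++ API), game code would set each of those per-character parameters independently on the same event instance:

```cpp
#include "fmod_studio.hpp"

// Sketch: one music event instance, one (hypothetical) volume parameter per
// character. Each parameter holds a single value, so each character's track
// needs its own parameter to be controlled independently of the others.
void SetCharacterPresence(FMOD::Studio::EventInstance* music,
                          bool aliceInRoom, bool bobInRoom)
{
    music->setParameterByName("AliceVolume", aliceInRoom ? 1.0f : 0.0f);
    music->setParameterByName("BobVolume",   bobInRoom ? 1.0f : 0.0f);
}
```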

What exactly is the behavior you want to achieve? Do you need to support more states for each character than just “character is present in scene” and “character is not present in scene”? Do you want characters’ tracks to always fade in and out at the same speed, or do you sometimes want them to fade in and out more gradually or abruptly than at other times? Do you want the ability to set the volume of different characters’ tracks separately? Do the characters’ tracks all have the same loop length, or do you need to keep them in sync despite their being different lengths? Does your event make use of any timeline logic beyond basic loops that might complicate things?

Ahh OK so it sounds like using a parameter per instrument would be appropriate in our case.

Sorry, I didn’t explain that very well. This is actually for music, and each character in the game has a leitmotif played by specific instruments that fade in and out depending on whether their personality is present in the room.

We’ll probably be fading in and out at the same speed, but having the latitude to change the speed could be useful. The instrument tracks’ volume will need to be set separately. They don’t all have the same loop length, but the instrument tracks must stay in sync because it’s music.

We’ve got intros and outros for the songs beyond the main loops, so that might involve more complicated logic, but I’m not certain.

Thanks for explaining all of this in detail!

In that case automating the volume of each instrument on a different parameter is a good option. (Putting each instrument on a different audio track and automating the track volumes would also work, and may be preferable if your music requires multiple instruments per character). This method will allow your game’s code to set the volume of any character’s leitmotif to any value at any time, and thereby give your game complete control over the time and speed at which those themes fade in or out.
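
If you want the fade speed itself to be under your game’s control, one common approach (sketched below; it is not the only way) is to ramp each parameter toward a target value every frame from game code. Alternatively, a parameter’s seek speed property in FMOD Studio can smooth value changes for you at a fixed rate. The sketch assumes the FMOD Studio 2.x C++ API, and the parameter name is hypothetical:

```cpp
#include <algorithm>
#include "fmod_studio.hpp"

// Sketch: fade one character's track by ramping its (hypothetical) volume
// parameter toward a target each frame. The game chooses the speed, so fades
// can be more gradual or more abrupt as the situation demands.
struct CharacterFade
{
    const char* parameterName;   // e.g. "AliceVolume" (made-up name)
    float       current = 0.0f;  // last value sent to the event instance
    float       target  = 0.0f;  // 0 = silent, 1 = full volume
    float       speed   = 0.5f;  // parameter units per second
};

void UpdateFade(FMOD::Studio::EventInstance* music, CharacterFade& fade, float dt)
{
    // Move the current value toward the target without overshooting.
    const float step = fade.speed * dt;
    if (fade.current < fade.target)
        fade.current = std::min(fade.current + step, fade.target);
    else
        fade.current = std::max(fade.current - step, fade.target);

    music->setParameterByName(fade.parameterName, fade.current);
}
```

Calling something like UpdateFade once per game update (before Studio::System::update) gives you per-character fades whose speed and target can be changed at any time.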

There are a number of FMOD Studio features that can be used in a variety of combinations to keep music tracks in sync:

  • Synchronous instruments make it possible to seek (or “scrub”) within an audio file, allowing you to start playing that file from any position in that file’s waveform.
  • Asynchronous instruments play their content from the start when triggered, meaning that an asynchronous instrument will produce the same behavior no matter which part of it is overlapped by the playback position. This can be used to allow loops of different lengths to play in sync.
  • Assigning quantization trigger behavior to an instrument forces it to only start playing on specific beats and bars, as defined by the event’s tempo marker(s).
  • Transition markers, transition regions, magnet regions, and loop regions can all be used to make the timeline playback position jump from one location on the timeline to another. This can be used to loop sections of the timeline or to seek the playback position to a specific point inside the audio file of a synchronous instrument. Transition regions and magnet regions can be quantized in the same manner as instruments.
  • The start offset instrument property allows you to start playing an instrument’s content from a point other than the start of that content, even if the instrument is asynchronous.

Each of these features is detailed in more depth in the FMOD Studio User Manual. It’s impossible for us to know exactly how you’ll need to use them, as we don’t know enough about the event you’re trying to create, but we’ll be happy to answer any specific questions that you have.

Oh, this isn’t related to synchronization, but most platforms can only support a single-digit number of simultaneously playing streams, so be sure to set your music assets to not stream if you want to avoid playback issues.
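
As an aside, if you do switch those assets from streaming to loading into memory, you may want to preload their sample data before the music is needed, so playback isn’t delayed while it loads. A minimal sketch, assuming the FMOD Studio 2.x C++ API and a made-up event path:

```cpp
#include "fmod_studio.hpp"

// Sketch: preload the sample data of a (non-streaming) music event ahead of
// time. "event:/Music/Ensemble" is a made-up path for the example.
void PreloadMusic(FMOD::Studio::System* system)
{
    FMOD::Studio::EventDescription* description = nullptr;
    if (system->getEvent("event:/Music/Ensemble", &description) == FMOD_OK)
    {
        // Loading is asynchronous; call unloadSampleData() when no longer needed.
        description->loadSampleData();
    }
}
```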

Awesome! Thanks so much for all the detailed information. Based on what you’ve said, I think it makes the most sense for us to use individual parameters per instrument.

It’s also really good to know about the quantization triggers and the limit on streaming assets. Thanks so much for all the details. I think we should be good to go!