Fading In and Out Music Tracks

Okay let me run this by everyone. I’m new to FMOD Studio (never used Designer), and I’m trying to get my bearings as best I can. What I want is to run 3 music tracks simultaneously (a melody + 2 backing tracks), so that they’re all in sync with each other. Then I want either parameters or cues set up so that the game can fade any track in or out at will. So for instance, one event in the game will cause the first backing track to fade in over 2 seconds, another will cause the second backing track to fade in over 1 second, yet another will cause the first backing track to fade out over 2 seconds, and so on.

It seems like a simple thing to do, especially since interactive music is a key feature for Studio, but I can’t seem to figure it out! Any help?

Hi
I was trying to attempt this today, and thought I had it set up correctly, but I’m not hearing anything when auditioning the snapshot modules that are placed on the parameters (in the 2nd event). Do the parameters need to be edited to something other than min=0 and max=1?

Or maybe there’s another way to do this? I want to do something similar: I have 3 music files, all at the same tempo.
“music1” is 4 bars and should always play
“music2” is 4 bars and should sometimes play, but must sync with “music1”
“music3” is 8 bars and should sometimes play, but must sync with “music1”

So they all have “music1” in common, but must sync up every 4 bars.

Or is it just easier to play one at a time and include the common music parts in all 3 music files?

thanks

I would create three tracks of 8 measures.

1: music1 + music1
2: music2 + music2
3: music3

Then I’d add 2 parameters that automate the volume of tracks 2 and 3.

There are a number of ways you could do this, but the one we currently recommend is as follows:

1. Place each music track on a different audio track within the same event, and set the volume of the event’s master track to silent. On each of the audio tracks, create a send to a different return bus.
2. In the mixer, create one snapshot for each of the return buses you’ve created. In each of these snapshots, scope out every bus other than the one return bus it should be associated with, and set the volume of that bus to silent.
3. Back in the event editor, create a new event with one parameter for each of the return buses. On each of these parameters, place a snapshot module corresponding to one of the return buses.

What you should end up with is an event where each music track in your original event has a send to a different return, each return is controlled by a different snapshot, and each snapshot is triggered by a different parameter in the new event. There’s still no fading in or out, though, which is why the next step is crucially important:

4. Apply an AHDSR modulator to the intensity of each snapshot module, and customize these modulators to have the attack and release times that you need.

That should cover what you need. This method is also easily expandable: if you want to add more songs to your game’s playlists, just set them up as events the same way as you did the first one, and route their tracks into the return buses you already have.
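On the game-code side, all of this is then driven by setting the snapshot-triggering parameters in the new event. Here’s a rough Unity sketch using the same legacy FMOD_StudioSystem calls that appear in the script later in this thread; the event paths (“event:/Music”, “event:/MusicControl”) and parameter names (“Backing1”, “Backing2”) are placeholders for whatever you create in steps 1–3, and it assumes your banks are already loaded.

using UnityEngine;
using FMOD.Studio;

public class MusicLayerFades : MonoBehaviour {

    FMOD_StudioSystem soundSystem;
    FMOD.Studio.EventInstance musicEvent;     // the event from step 1 (the music tracks)
    FMOD.Studio.EventInstance controlEvent;   // the event from step 3 (the parameters + snapshots)
    FMOD.Studio.ParameterInstance backing1;
    FMOD.Studio.ParameterInstance backing2;

    void Start () {
        soundSystem = FMOD_StudioSystem.instance;

        // Placeholder paths - newer versions expect the "event:/" prefix.
        musicEvent = soundSystem.getEvent("event:/Music");
        controlEvent = soundSystem.getEvent("event:/MusicControl");
        musicEvent.start();
        controlEvent.start();

        // Placeholder names for the parameters carrying the snapshot modules.
        controlEvent.getParameter("Backing1", out backing1);
        controlEvent.getParameter("Backing2", out backing2);
    }

    // Where you place each snapshot's trigger region on its parameter decides which
    // values silence or un-silence that track's return bus; the AHDSR attack and
    // release on the snapshot's intensity provide the timed fades either way.
    public void SetBacking1(float value) { backing1.setValue(value); }
    public void SetBacking2(float value) { backing2.setValue(value); }
}

The game then just calls SetBacking1/SetBacking2 from whatever gameplay events should bring each layer in or out.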

Wow interesting, I definitely wouldn’t have thought of that. I’ll try that out, thanks a lot!

Hi Joseph Harvey, is this “Seek Speed” option available only in FMOD Studio 1.3? Or in 1.211 as well? Thanks

I assume you mean 1.02.11, in which case the answer is somewhere between ‘yes’ and ‘no’.

Seek speed has technically been supported in Studio since shortly after its initial release, in an under-the-hood sort of way, in order to support migrated Designer projects. However, the controls for setting seek speed only appear in Studio versions 1.03.00 and later. You can add seek speed in earlier versions by manually editing the project’s XML, but upgrading to a 1.03 version of Studio is probably the easier option.

Thanks for the feedback, but I’m having a little trouble with the updates.

If I use FMOD Studio version 1.2.11 and Unity Integration 1.02.09, and follow the example on this page: viewtopic.php?f=30&t=18688, I’m able to modify the sound with the parameters, and that works.

But if I update FMOD Studio to 1.3.1 or anything above it, and also the Unity Integration to 1.03.01 (using exactly the same things I used with the older version), I keep receiving error messages like “object reference not set to an instance of an object” on many lines of the code.

Once again thank you.

To be more specific, here’s the code and the messages I’m receiving:

using UnityEngine;
using System.Collections;
using FMOD.Studio;

public class FMODFredSound : MonoBehaviour {

    FMOD_StudioSystem soundSystem;
    FMOD.Studio.EventInstance zoom;
    FMOD.Studio.ParameterInstance amount;
    float pos = 0;

    void Start () {
        soundSystem = FMOD_StudioSystem.instance;

        string fileName = Application.dataPath + "/StreamingAssets/Master Bank.bank";

        // (declared but not used further in this snippet)
        FMOD.Studio.Bank bank;
        FMOD.Studio.Bank bankStrings;

        zoom = soundSystem.getEvent("/music");

        // I'm receiving the error on the next line:
        // NullReferenceException: Object reference not set to an instance of an object
        // FMODFredSound.Start () (at Assets/Scripts/FMODFredSound.cs:25)
        zoom.start();

        zoom.getParameter("music part", out amount);
    }

    // Update is called once per frame
    void Update () {
    }

    void OnGUI() {
        if (GUI.Button(new Rect(100, 100, 200, 30), "Modify sound")) {
            pos += .1F;
            amount.setValue(pos);
        }
    }
}

And I also get this message at the start of the game:

Expected event path to start with ‘/’
UnityEngine.Debug:LogError(Object)
FMOD.Studio.UnityUtil:LogError(String) (at Assets/Plugins/FMOD/FMOD_StudioSystem.cs:58)
FMOD_StudioSystem:getEvent(String) (at Assets/Plugins/FMOD/FMOD_StudioSystem.cs:155)
FMODFredSound:Start() (at Assets/Scripts/FMODFredSound.cs:24)

Thank you!

We changed the naming of event paths; they now need to be prefixed with “event:”, so if you change this to “event:/music”, that should resolve the issue. The error message that says it needs to start with “/” is out of date and has been updated for the next release.
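For reference, the corrected line in the script above would be:

    zoom = soundSystem.getEvent("event:/music");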

Done! Works fine!

Thanks for the support!

At this point in Studio’s development, what is the best method to achieve this? My team and I are hoping to achieve a similar effect - a parameter called “Progress” would have the ability to fade in and out the volume of each track, so that with more “Progress” comes a more elaborate song.

The method described above would work; you’d just need to control all the snapshots from a single parameter, instead of having one parameter per snapshot.

That said, depending on what your requirements are, there may be simpler methods that you can use. For instance:

1. Create an event with one audio track for each music track you want to play. Place one of your sound files onto each audio track.
2. Create a parameter called “Progress.”
3. For each sound module, set its Trigger Behaviour logic to include the “Progress” parameter’s value being in a certain range.
4. Apply an AHDSR modulator to the volume of each sound module.
5. Optionally, place a tempo marker on the timeline, and specify in each sound module’s trigger logic that it should use an appropriate quantisation interval.

This method is less complicated and resource-intensive than the one I described above, but it assumes that you’re either using timelocked sounds or that you don’t mind music tracks starting at different times.

Alternatively, if you want to have very precise control over volume while keeping all your audio perfectly in sync, you could do the following:

1. Create an event with one audio track for each music track (as in the previous example) and a “Progress” parameter.
2. Place a music track on each audio track.
3. Automate the volume of each audio track: create an automation envelope on the “Progress” parameter that sets each track to be audible when the parameter is at the appropriate values.
4. Give the “Progress” parameter a seek speed other than 1. (Seek speed is the maximum rate at which a parameter’s value can change per second; to set it, click on the parameter tab until the parameter’s module appears in the deck, then set the value of the “Seek Speed” knob.)

Which method you should use really depends on what you want and need to achieve; without knowing more about your project, I can’t really give any more specific advice.
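For what it’s worth, the game-side code looks the same for either of those “Progress” approaches: it just sets the parameter value and lets the event do the fading (via the AHDSR modulators in the first method, or the volume automation plus seek speed in the second). Here’s a minimal Unity sketch along the lines of the script earlier in the thread; the “event:/music” path and the “Progress” name are placeholders for whatever you’ve set up in your own project.

using UnityEngine;
using FMOD.Studio;

public class ProgressMusic : MonoBehaviour {

    FMOD_StudioSystem soundSystem;
    FMOD.Studio.EventInstance music;
    FMOD.Studio.ParameterInstance progress;

    void Start () {
        soundSystem = FMOD_StudioSystem.instance;

        // Placeholder event path and parameter name - use your own.
        music = soundSystem.getEvent("event:/music");
        music.start();
        music.getParameter("Progress", out progress);
    }

    // Call this as the player advances. With the trigger-range + AHDSR method,
    // crossing a sound module's range fades it in or out over the modulator's
    // attack/release times. With the automation + seek speed method, the parameter
    // glides towards the new value at the seek speed, following the volume curves.
    public void SetProgress(float value) {
        progress.setValue(value);
    }
}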

I wonder if it would be helpful, when using the “Progress” technique described above, for a parameter’s “Hold” control to have something like a Trigger Behaviour control? In fact it would be more like having a “Sample” control in addition to “Hold” (as in “sample and hold”). This feature could then be triggered by markers on the logic tracks on the timeline.

This would mean the sound/music designer could easily identify safe places for automation transitions to occur. That is, although a parameter might have been changed by something in-game, its actual value controlling the audio wouldn’t take effect until the timeline hit the start of a bar, a beat, or even a marker on the logic tracks.

That sounds like a pretty cool idea. I’ll add it to our tracker.

Great, thanks. It’s a technique I used to use in Designer-based projects: put WAV cue markers in the source files, then add a callback in code to listen for the cue markers. Then, when changing parameters in the game, only allow the parameter change to actually take effect when a specific cue name was hit in the audio file.

It would be cool to be able to do all this without needing a programmer to implement it.

Usefully, I notice Studio lets you use the same marker name multiple times, which would be helpful for this technique, i.e. “sample this parameter when the timeline hits a marker named ‘A’”.
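For anyone curious, the code side of that pattern is tiny. Here’s a minimal sketch of the gating logic, deliberately independent of any particular FMOD callback API: the class and method names are made up, and you’d wire OnMarkerHit up to whatever cue/marker callback your integration exposes, and applyToAudio to something like ParameterInstance.setValue.

using System;
using System.Collections.Generic;

// "Sample and hold" gate for parameter changes: gameplay can request new values
// at any time, but they are only applied when the chosen marker/cue is reached.
public class MarkerGatedParameters {
    readonly string gateMarker;                   // e.g. "A"
    readonly Action<string, float> applyToAudio;  // pushes a value to the audio engine
    readonly Dictionary<string, float> pending = new Dictionary<string, float>();

    public MarkerGatedParameters(string gateMarker, Action<string, float> applyToAudio) {
        this.gateMarker = gateMarker;
        this.applyToAudio = applyToAudio;
    }

    // Gameplay calls this whenever it likes; nothing changes audibly yet.
    public void Request(string parameterName, float value) {
        pending[parameterName] = value;
    }

    // Call this from your marker/cue callback. Pending changes are only applied
    // when the gate marker is hit, so transitions land at musically safe points.
    public void OnMarkerHit(string markerName) {
        if (markerName != gateMarker) return;
        foreach (var kv in pending) applyToAudio(kv.Key, kv.Value);
        pending.Clear();
    }
}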

Being able to have more than one marker with the same name in an event is actually a known bug, and a future version of Studio will enforce unique marker names. Your idea of being able to tie parameter updates to specific points on the timeline is good, though; I’ll suggest we adopt something of the kind when that feature is being designed.

I just had a go at implementing this, what an intro to the mixer and routing structures in Studio!

1. Do I need to add the original event from step 1 as an Event Reference module in the new event in step 3? Otherwise I couldn’t get any sound. One issue here is that syncing only works if playing from the start of the event, making auditioning difficult and time-consuming if the interactive music structure becomes complex.
2. The only way I could find to add a bus to the mixer was to add a send (say, from the master bus) and then immediately remove it (as I don’t really want a send from the master bus). Is that the expected behaviour? I’d expect to be able to right-click in the mixer and “Add Return Bus…” or something like that.

:lol: I thought as much. I guess that could get really confusing when using the Add Transition To… options!

You don’t need to add the event as an event reference sound module. Unfortunately, FMOD Studio currently doesn’t make it easy to audition more than one event at a time; we’re aware of this, and are working on a number of improvements that should help. In the meantime, you can audition more than one event at a time - and thus audition both your event and the snapshots that affect it - by opening each in a separate event editor window. (As a side note, having multiple events isn’t strictly necessary; you could easily put the snapshot trigger regions and the parameters that control them in the same event. Keeping them separate makes it easier to apply the same snapshots to other pieces of music in your game, however.)

Yes and no. As long as you use timelocked sound modules on the same timeline, they will be kept in sync by virtue of Studio’s rule of always playing the part of the waveform that the timeline cursor is over. If you use non-timelocked sound modules at any point, then you will lose that benefit. Quantization may serve the same purpose in some cases, but not all; it depends on the requirements of your event.

Incidentally, we do plan to introduce new kinds of timelocked sound module in future, but for now, single sounds are the only sort available.

You should indeed be able to right-click and create a return bus from the routing browser’s context menu. It is odd that you could not; what version of Studio are you using?