DAW to FMOD Workflow

Hello!

I am still just getting used to FMOD, and have a decent grasp on how I’ll go about adding simple SFX (ambience, footsteps, etc.), but one of my goals is to use it for adaptive music at some point, which seems to be the scariest challenge. While everything about this goal will undoubtedly present a huge learning curve for me, my question for now is this:

Let’s say I’ve recorded a song for my game in Logic Pro. Obviously I will have composed the song with different sections and parameters in mind, but I am wondering what I need to do more “technically” in terms of making the process as seamless as possible. For example, it would be ideal if I could drag and drop the whole Logic project into FMOD, and then be able to cut and move around tracks as I see fit, but I assume that such compatibility doesn’t exist. Does this mean that I need to import, for example, every measure of every instrument into FMOD individually? If I don’t do this, I assume that just dropping the whole thing in there will simply render the entire song as one track/instrument.

I realize that my question isn’t very specific, but I guess I’m just looking for some insight on making the whole process of setting up the music for “parameterizing” in FMOD as easy and efficient as possible.

Thanks so much!

Lorenzo

Hi!
Adaptive music is the most exciting thing to do with FMOD. There are several ways of doing adaptive music, and each one leads to a specific workflow. I certainly still have a lot to learn, but from my experience, here are the main ways to go:

  • Horizontal adaptive music (re-sequencing):
    Export one full mix for each sequence, with the reverb tail at the end to smooth out the loops. For an existing piece of music that you want to re-sequence, you could do some preliminary work in FMOD with the whole track, using loops/magnets/transitions, to test and determine the appropriate sequences before going back to the DAW to cut them properly.

  • Vertical adaptive music (re-orchestration):

    • Volume automation across different instrument groups:
      Create one stem per instrument group in your DAW (if needed, copy your effect buses onto each stem bus, otherwise they will be lost during export). Export each stem to its own FMOD track and do your automation (see the code sketch after this list).
    • Cross-fade automation across several baked orchestrations/versions:
      Same as above, but instead of stems you export several full mix versions of your project, import each mix onto its own FMOD track, and do your automation.
  • Procedural:
    The more procedural your FMOD project is, the more granular the assets you export from your DAW will be, and the more FMOD takes charge of (the mixing as well). Along those lines, I recently did an experiment that may fit in our upcoming game, which you can listen to here: semi-generative music.
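
To connect this to the game-code side: whichever way you slice your assets, the automation you build in FMOD Studio is ultimately driven by parameters your game sets at runtime. Here’s a minimal C++ sketch using the FMOD Studio API for the vertical case; the event path "event:/Music/Gameplay" and the 0-1 "Intensity" parameter are assumptions, so substitute whatever your project actually defines.

```cpp
#include "fmod_studio.hpp"

// Minimal sketch: drive stem volume automation by setting one parameter.
// "event:/Music/Gameplay" and the "Intensity" parameter are placeholders;
// use whatever event path and parameter name your project defines.
int main()
{
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Banks exported from FMOD Studio must be loaded before events resolve.
    FMOD::Studio::Bank* master = nullptr;
    FMOD::Studio::Bank* strings = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &master);
    system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &strings);

    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Music/Gameplay", &description);

    FMOD::Studio::EventInstance* music = nullptr;
    description->createInstance(&music);
    music->start();

    // The automation curves authored in FMOD Studio fade stems in and out
    // as this value moves; in a real game you'd ramp it gradually.
    music->setParameterByName("Intensity", 0.75f);

    system->update(); // call once per frame in a real game loop
    return 0;
}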

Of course, these methods can be combined. You could export each section (for re-sequencing), divided into several stems (for re-orchestration), and at the same time have a more generative track randomizing between some assets.

But I’d love to hear about the workflow of experienced users.


Thank you!!! I’m still too “green” to understand some of the terminology there, but luckily I will be working with an actual musician/producer, and this is a great start. Also, that audio track you linked is awesome!

This Wikipedia page gives an explanation of vertical and horizontal adaptive music. Feel free to ask if something isn’t clear!


It depends entirely on what you want your interactive music to do. FMOD has a number of different tools that can be used for adaptive music, and they each achieve different things and require different source audio assets to work.

If you look at the “Music/Level 03” event in the examples.fspro project that comes with FMOD Studio, you’ll see a simple example of adaptive music that uses just a single audio file, but uses logic markers to loop specific regions of that audio file and to jump between those regions when the parameters are set to specific values.
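
For what it’s worth, the game-code side of that kind of logic-marker setup can be tiny. A hedged C++ sketch, assuming a hypothetical discrete "Progress" parameter that the event’s transition regions are conditioned on:

```cpp
#include "fmod_studio.hpp"

// Game-code side of logic-marker-driven re-sequencing. The "Progress"
// parameter and its values (0 = intro, 1 = combat, 2 = outro) are invented
// for illustration; the event's transition regions are conditioned on them.
void onLevelPhaseChanged(FMOD::Studio::EventInstance* music, int phase)
{
    // FMOD quantizes the actual jump to the markers/regions authored in the
    // event, so the music stays on the grid; the game only reports the phase.
    music->setParameterByName("Progress", static_cast<float>(phase));
}
```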

By contrast, the “Music/Level 02” event uses a piece of music that’s been split into stems. Which stems are audible is determined partly by chance (each multi instrument contains two possible stems, and randomly selects which one to play each time the event loops), but also by automation, which ensures that ramping the parameter causes tracks to smoothly fade in and out of audibility. (Alcibiade’s incredible example is similar in some ways, but also uses parameter trigger conditions and quantization to ensure some instruments only play at specific parameter values, and scatterer instruments to achieve more complex randomization for some tracks.)

Wikipedia’s definition of horizontal and vertical adaptive music is interesting, but also potentially confusing for a composer new to adaptive music, as it conflates the changes in game state that might trigger an adaptive change with the kinds of adaptive change that could occur: there is no reason why a player’s movement within the narrative of a game could not cue phase branching, nor why a player’s choice of where they go in an environment could not trigger a change in mix.

Really, it’s best to start by asking yourself “what things that happen in-game should trigger a change?” and “how do I want the music to change when that happens?” as two separate questions. Once you know the answers, you have two separate tasks: creating an event that changes the way you want when its parameters are set to specific values, and ensuring your game’s code sets those parameters to the appropriate values when your game enters the appropriate state.
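
That second task often reduces to a small mapping from game states to parameter values. A sketch of what that might look like; the states and the "Threat" parameter name are invented for illustration, and the authoritative mapping is whatever you authored in FMOD Studio:

```cpp
#include "fmod_studio.hpp"

// Hypothetical game states and the parameter values the music event expects.
enum class GameState { Explore, Combat, BossFight };

float threatValueFor(GameState state)
{
    switch (state)
    {
        case GameState::Explore:   return 0.0f;
        case GameState::Combat:    return 1.0f;
        case GameState::BossFight: return 2.0f;
    }
    return 0.0f;
}

// Call this wherever your game's state machine transitions.
void onGameStateChanged(FMOD::Studio::EventInstance* music, GameState state)
{
    music->setParameterByName("Threat", threatValueFor(state));
}
```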


Thank you to you both again. Very comprehensive answers, and lots to chew on. Cheers!

FMOD allows us to combine any type of musical element, from a single sample to a whole song section, with any technique used in interactive audio.

The first step I propose is to approach each project with its own needs in mind. Discuss those needs with the game designer and the programmer, or the writers and the level designers.

Each time I approach a new project, I first try to define what I call the “zoom level”.

The zoom level is how macroscopic or how microscopic your focus needs to be when you create music for the game.

If you need a “zoomed-out” approach, that means you treat the player as the conductor of the music. Loops, parts, and different tracks will play according to the player’s decisions, while stingers and cues help “glue” the transitions and signal important game feedback.
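
Stingers, by the way, are usually authored as separate one-shot events and fired from code at the moment of the transition. A minimal C++ sketch, with "event:/Music/Stinger" as a placeholder path:

```cpp
#include "fmod_studio.hpp"

// One-shot stinger sketch: fire-and-forget an event to "glue" a transition.
// "event:/Music/Stinger" is a placeholder path. Releasing right after start
// lets FMOD clean the instance up automatically when it finishes playing.
void playStinger(FMOD::Studio::System* system)
{
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Music/Stinger", &description);

    FMOD::Studio::EventInstance* stinger = nullptr;
    description->createInstance(&stinger);
    stinger->start();
    stinger->release(); // instance is destroyed once playback stops
}
```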

If you need a “zoomed-in” approach, that means you treat the player as an instrumentalist. The player’s actions and decisions will trigger individual notes and play a significant role down to the tiniest details, while the overall music might stay the same or follow the philosophy of the zoomed-out methodology.

You can combine zoom levels and techniques (vertical, horizontal, generative, or hybrid) freely. The important thing is for your choices to serve the design intent of the game and to give the player the appropriate feedback, enhancing the experience.

FMOD, like any other modern game audio middleware, also features real-time effects. So, if the lead programmer agrees to spend some extra resources on audio, and your design calls for it, you can apply extra processing to specific tracks depending on real-time game parameters or game states. How about raising the distortion on the snare as the hero’s health falls, or a bass boost on the music’s group bus following an “action_level” variable set by a combination of the game state, how many enemies are near the hero, and how many bullets are left in their gun?
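
To make that “action_level” idea concrete, here is one possible sketch. The weighting is invented for illustration, and the actual distortion or EQ automation would be authored in FMOD Studio against this global parameter:

```cpp
#include <algorithm>
#include "fmod_studio.hpp"

// Combine a few game facts into a single 0-1 value and push it as a global
// parameter. The weights and the "action_level" name are illustrative; the
// DSP effects automated against the parameter do the actual processing.
void updateActionLevel(FMOD::Studio::System* system,
                       float heroHealth01, int nearbyEnemies, int bulletsLeft)
{
    float danger   = 1.0f - heroHealth01;                    // low health = tense
    float pressure = std::min(nearbyEnemies / 10.0f, 1.0f);  // crowding
    float scarcity = bulletsLeft < 5 ? 0.3f : 0.0f;          // ammo anxiety

    float actionLevel = std::min(danger * 0.5f + pressure * 0.4f + scarcity, 1.0f);

    // Global parameters are set on the Studio system, not on one event.
    system->setParameterByName("action_level", actionLevel);
}
```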

Exciting stuff!


Exciting indeed, and thank you for all of these thoughts and ideas to chew on. Much appreciated!