Audio Structure for Project

Dear All,

I am not a sound engineer and have little budget for sound design, so I thought I'd ask here for help. Sorry it's long.

I have watched all the FMOD tutorials, but am struggling with the initial structure for my project. It has several main areas of play: Strategy View and Planetside View (each with a unique score and tempo), plus Battles, Actions, and Summary. In each of these areas we want the music to project an emotional response, such as calm, frantic, etc.

The music tempo changes between scenes, so is it best to have a single timeline with different tempo markers on it? Or can we jump between different event timelines that each have their own arrangement, tempo, and instruments?

Also, any suggestions on a workflow for unifying all the sounds in a scene so there is harmony between them would be appreciated. I understand dynamic range, so I've set volume levels in FMOD so that each clip's perceived loudness is roughly the same (ship engines, laser rounds, shields-active feedback, etc.). I then send them to their own dedicated bus (an engines bus, for example) so I can manipulate all engines on one channel fader while they play amongst the other audio assets in the scene. Is this overkill, and likely to cause performance issues, since it results in a lot of buses?

Of course I can set audio levels in my DAW (Audition), but each time I do, I'd be reducing the quality, so it's best to set them in FMOD, right? I also want to EQ a lot of the clips to get the sounds perfect, but I'm sure that would kill the CPU. Again, I could do this in my DAW, but it's a much longer workflow.

Lastly, I have a reverb on my rain effect on one planetside level, but the effect is still heard when exiting the planet. Is there a way to cut off the effect during the transition between planet and space, or must I create the cut-off myself with a parameter, i.e. automate the reverb wet level from 1 down to 0 across the parameter's range?

I have a thousand more questions, but I will keep schtum. I appreciate any help.
Thanks in advance.

Here's a rough schematic of how our audio is set up. It doesn't include every sound effect routed to its own unique bus, but it gives an indication of some.

You seem to be talking about film music, with a linear approach. The purpose of FMOD is precisely to "jump" dynamically from one state to another. It's highly improbable you'll use a single event for those different scenes, but the level of granularity you choose is a matter of taste. You can either have different sections in a single timeline, or have different events for each scene, triggering the switch either in the code or in a parent event. If you tell us more about the project, we could suggest the best approach.
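For instance, if you go with separate events per scene, the switch from code can be quite small. Here's a minimal C++ sketch using the FMOD Studio API; the event path is a hypothetical placeholder, not a name from your project:

```cpp
#include "fmod_studio.hpp"

// Stop the current scene's music (with a fadeout) and start the next one.
void switchMusic(FMOD::Studio::System* system,
                 FMOD::Studio::EventInstance*& current,
                 const char* nextEventPath)
{
    if (current) {
        current->stop(FMOD_STUDIO_STOP_ALLOWFADEOUT); // let the old score fade out
        current->release();                           // destroyed once it finishes stopping
        current = nullptr;
    }

    FMOD::Studio::EventDescription* description = nullptr;
    if (system->getEvent(nextEventPath, &description) == FMOD_OK) {
        description->createInstance(&current);
        current->start();
    }
}

// e.g. when entering the Planetside view (hypothetical event path):
//   switchMusic(studioSystem, musicInstance, "event:/Music/Planetside");
```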

Having a bus for each category of sound is a perfectly normal and efficient workflow.
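And if you ever need to drive one of those category buses from code rather than from the Studio mixer, it's only a couple of calls. A sketch; the bus path "bus:/SFX/Engines" is just a guess at your layout:

```cpp
#include "fmod_studio.hpp"

// Scale every event routed into the engines bus at once.
void setEnginesVolume(FMOD::Studio::System* system, float volume)
{
    FMOD::Studio::Bus* enginesBus = nullptr;
    if (system->getBus("bus:/SFX/Engines", &enginesBus) == FMOD_OK) {
        enginesBus->setVolume(volume); // 0.0 = silent, 1.0 = as mixed
    }
}
```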

I don't think that's true (or if it is, it's fairly theoretical). It's simply a matter of workflow and personal preference.

If you want a dynamically adjustable EQ for some reason, put it in FMOD. If not, just bake it into your audio assets with your DAW and save the CPU footprint.

Yes, you could automate the reverb with a parameter. Also think about putting the reverb on a mixer return, instead of on individual tracks/events, if you have several events to send to it; you'll save some CPU. In that case, the automation can be done with a snapshot.
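Snapshots are started and stopped from code just like events, via a "snapshot:/" path. A rough sketch, with a hypothetical snapshot name:

```cpp
#include "fmod_studio.hpp"

// Engage the planetside reverb mix while the player is on the surface.
FMOD::Studio::EventInstance* startPlanetsideReverb(FMOD::Studio::System* system)
{
    FMOD::Studio::EventDescription* description = nullptr;
    FMOD::Studio::EventInstance* snapshot = nullptr;
    if (system->getEvent("snapshot:/PlanetsideReverb", &description) == FMOD_OK) {
        description->createInstance(&snapshot);
        snapshot->start(); // raises the reverb return's wet level
    }
    return snapshot;
}

// When leaving the planet:
//   snapshot->stop(FMOD_STUDIO_STOP_ALLOWFADEOUT);
//   snapshot->release();
```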

(I’ll check your diagram later)


Thank you so much for the reply; it really helps just to have some direction.

> You seem to be talking about film music, with a linear approach.

Well, I have friends who are music artists, and each has created a score for their given faction. The idea was then to break each score down into stems to allow for a dynamic soundtrack when the player is on each of the 6 planets. But with all the different instruments each artist has supplied, I imagine it would get messy having everything in one event called Music, with a number of transition regions for each planet, and so on…

Hence my suggestion to transition to another music event (score) so I can keep things workable. Of course, if anyone has another idea of how best to implement this, I'd be grateful to hear it.

Can you explain how a parent event would work? Our game is phase-based: three phases to each day, with a summary at the end. Each phase will have its own score; some are dynamic, such as battles, but others, such as the summary phase, where players are not engaging and are just looking at the scoreboard, are not. So calling an event at the start of each phase sounds logical.

Thanks again!

I'm not certain either, but an audiophile gave this argument:

If our assets are quiet, they may sound fine at that level, but when we turn them up, the noise floor or other unwanted noise may become noticeable where it wasn't before.

Which I kinda understand. So I'm guessing the best approach is to always keep the audio at its loudest level throughout the signal chain until it needs to be balanced, at, say, the SFX-type group bus, which is what I've been doing.

Yeah, but:

  1. raising the level in FMOD or in your DAW won't make any difference
  2. raising the level isn't the problem; if your signal-to-noise ratio is bad, the problem is in the source material

This kind of reasoning could make sense in the analogue era. In the digital world, lowering and then raising the signal along the chain won't make any noticeable difference, since processing is calculated internally at 32-bit float in any DAW (I'm not exactly sure of FMOD's internal bit depth).
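If you want to convince yourself, here's a quick standalone test of that 32-bit float round-trip (my own illustration, nothing FMOD-specific): attenuate by 60 dB, boost by 60 dB, and measure the worst error.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    const float down = std::pow(10.0f, -60.0f / 20.0f); // -60 dB gain
    const float up   = 1.0f / down;                     // +60 dB gain

    float worstErrorDb = -999.0f;
    for (int i = 1; i <= 1000; ++i) {
        float sample    = static_cast<float>(i) / 1000.0f; // 0.001 .. 1.0
        float roundTrip = (sample * down) * up;            // lower, then raise
        float error     = std::fabs(roundTrip - sample);
        if (error > 0.0f)
            worstErrorDb = std::max(worstErrorDb, 20.0f * std::log10(error));
    }
    // Typically prints a value well below -120 dBFS: far under the noise
    // floor of any real-world recording, let alone anything audible.
    std::printf("worst round-trip error: %.1f dBFS\n", worstErrorDb);
    return 0;
}
```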


You can drag and drop (or create) entire events inside an event, and you can nest as many levels deep as you like. But some people prefer to have the instruments (FMOD instruments) directly on the timeline rather than in nested events; it's often a matter of taste. I like to encapsulate things into events and reuse them as bricks.

We could ask whether the events should be called directly by the code or programmed in a parent event in FMOD. If the transitions between those events should be "musically" programmed and timed, it should be done in a parent event in FMOD. If not, it could also be done in the game code (for example, quitting the start menu music, then starting the main game music).


I should mention that a parent event and child event do not share tempo, so this approach does have limitations, especially if you’re planning to make use of varying tempo and quantization.

Whether it's "best" is a subjective judgement; each approach has its own advantages and disadvantages. Putting everything in one event gives that event a relatively heavy resource cost, as all the assets it uses must be loaded before it can play, but lets you easily take advantage of quantization to ensure on-beat and on-bar transitions. Splitting your music up into separate events gives you greater control over loading, but limits your ability to have transitions occur on the beat.

As alcibiade says, routing events into group buses so that you can manipulate those events as a group is standard practice; in fact, it's what group buses are for. Because of the way the mixer is designed, it's usually cheaper in resources to put an effect on a group bus than to put it on each of the individual events routed into that group bus.

Wrong, or at least wrong under most circumstances. Adjusting the amplitude of a waveform normally has no effect on its quality, though if there are existing issues in the audio file, making it louder may make those problems easier to hear. Of course, any problems in the source audio will become more obvious when you make it louder regardless of whether you do so in a DAW or in FMOD Studio.

The CPU cost of an effect (such as the 3-EQ or Multiband EQ effects) depends partly on how many instances of the effect are running simultaneously. Thus, placing an effect on every track of an event is more expensive than placing a single instance on that event's master track; placing an effect on a group bus is cheaper than placing it on every event routed into that bus; and so on.

That being said, an effect that’s not in your FMOD Studio project has no CPU cost, so it’s always cheaper to bake effects into your source audio files, provided that you don’t need them to apply dynamically.

If you're concerned about the CPU cost of effects, we recommend using live update to connect to your game and recording a session in FMOD Studio's profiler window. The profiler window contains a number of tools designed to let you see how much resources your game consumes.
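For reference, live update just needs to be enabled when the Studio system is initialized. Something along these lines (a minimal sketch, with an arbitrary example voice count):

```cpp
#include "fmod_studio.hpp"

// Create and initialize the Studio system with Live Update enabled, so
// FMOD Studio's profiler can attach to the running game.
FMOD::Studio::System* createStudioSystem()
{
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(1024,                        // max channels (example value)
                       FMOD_STUDIO_INIT_LIVEUPDATE, // allow Studio/profiler to connect
                       FMOD_INIT_NORMAL,
                       nullptr);
    return system;
}
```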

There are a whole bunch of ways to do this, but automating the reverb wet level on a parameter sounds like a direct and effective solution: parameters are a tool for telling FMOD about changes in your game's state, and automation is a tool for defining how your game's audio should change in response to those changes.
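On the code side, driving that parameter is a one-liner per transition. A sketch, where the parameter name "ReverbWet" stands in for whatever you call it in Studio:

```cpp
#include "fmod_studio.hpp"

// Assuming a 0..1 event parameter with the reverb wet level automated on
// it: 1 = full reverb on the planet surface, 0 = dry in space.
void onLeavePlanet(FMOD::Studio::EventInstance* rainInstance)
{
    rainInstance->setParameterByName("ReverbWet", 0.0f);
}

void onEnterPlanet(FMOD::Studio::EventInstance* rainInstance)
{
    rainInstance->setParameterByName("ReverbWet", 1.0f);
}
```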

As a rule of thumb: If you want two sound files to be triggered by the same in-game cue, to play in perfect sync, or for one to do something with perfect timing with respect to the other, they should be in the same event.

Thank you very much for your detailed answers, Joseph; it's all very helpful.

I still feel it might be best to keep the scores as separate events, as the transitions between views last around 3 seconds, so I'm hoping things don't need to be exactly quantized between scores.

Also noted about putting the reverb on a mixer return. I've now implemented it in my signal chain, and I noticed less load on the CPU too.

Thanks again to both of you for all the help; I'm a lot more confident now about tackling the audio in eonwar. P.S. Is there a way to expand all folders in the event panel when searching for a keyword?

As of the time of writing (June of 2021), there is no way to expand all folders while searching the events browser.

However, if you click the “Flatten” button at the bottom of the browser, it’ll display just the events without their containing folders.
