Studio preview feedback/questions


I’ve gone through the manual, example project and tutorial vids and have compiled some questions/comments. Sorry for the lengthy post, but I figured I’d mention everything that came to mind, in case it’s better to know about these things sooner rather than later. Also, sorry if I’m asking about things that are already documented but that I glossed over. I take it that many of these items are already on the to-do list, but I figure I should mention them in case it’s assumed that no one uses such features:


  • Will templates be added, or does something that’s analogous to templates already exist?
  • Has the build process been implemented yet? I was able to click the build button in the example project, but it didn’t seem like anything happened. I take it we’ll be able to specify the build location (among other things) when it’s up and running?
  • I take it that encryption will be added eventually?


  • For the ‘Max Voices’ (aka Max Playbacks), when ‘Voice Stealing’ is set to OFF, I understand that this is analogous to ‘just fail’, whereas when it’s set to ON, I assume that’s analogous to ‘steal oldest’? If so, I can probably live without having ‘Steal quietest’, ‘Steal newest’ or ‘Just fail if quietest’. I don’t believe I’ve used anything other than ‘Steal oldest’ or ‘Just Fail’, but maybe other people have.
  • Is it possible to automate or simply set the pan of an audio track (not the master track)?
  • Will there be more parameter properties such as velocity or seek speed? I sometimes want a more gradual transition from one parameter value to another.
  • Is it possible to add a ‘loop region’ to parameters other than the ‘Timeline’ parameter?
  • Having the ability to type in the position of a module in the timeline would be useful for complex events (something akin to ‘Sound Instance Properties’ in Designer). I once had to lay out several sounds in a timeline to cover a cutscene, and being able to type in each sound’s position in the timeline helped tremendously. The zoom feature that’s already there should help with this a lot, though.
  • Will something analogous to ‘Speaker Level’ be added? Sometimes after tweaking the front/back positioning of a sound, I want to reduce/boost the centre channel volume. Maybe if you right-click on a speaker in the surround panner, you could add automation to it on a per-speaker basis? This could be handled via an independent automation track per speaker, or it could basically add the ‘Speaker Level’ effect (from Designer) to the event.
  • Is it possible to add multiple volume ‘effects’ to a single audio track? I sometimes have one volume effect modulating the volume (say, to create volume variations in an ambiance loop) and a second one controlling the overall volume. Without this feature, or something similar, it will likely be very slow/tedious to adjust the overall volume of an audio track that contains many automation curve points. Alternatively, being able to grab all the automation points and drag them up/down in unison would likely solve this problem.
  • I take it that the ADSR for the Volume of the Master track of an event is how event fade-in/out is now handled (correct me if I’m wrong). That being said, I think there’s a bug with the Release portion of the ADSR. When I hit stop, the sound doesn’t fade out as per the Release value. The Attack works, but only if I start the event from the beginning of the timeline. If you press play with the timeline at a non-zero value, the Attack value is ignored. I take it that this Attack behaviour may be as-designed, but I’m not sure what advantage it offers.
  • Is there an ‘Occlusion’-type effect for 3D events that can be added to a Distance parameter? More specifically, is there a LP filter (and HP filter, like ‘3D Auto Distance Filtering’) that uses little CPU, like ‘Occlusion’ does? Or maybe the HP and LP filters that are there already use as little CPU as Occlusion.
  • I tested out the importer and so far it seems that the most notable ‘loss’ during the import has been that my 3D events (with a ‘distance’ parameter) are basically empty. The modules are gone, along with the custom automation curves that were present on my ‘distance’ parameter (e.g. Occlusion and 3D Pan Level automation curves). Also, the ‘Speaker Level’ and ‘Surround Pan’ settings/automation curves for my ambiances have been lost. Of the time I spend in Designer, a large portion goes to tweaking those parameters, so being able to import them reasonably well would make the importer much more valuable.

Module Playlists (aka Sound Definitions)

  • Will many of the ‘Sound Definition’ properties present in Designer be brought over to Studio (such as ‘Spawn Time’ and ‘Trigger Delay’)? When creating an ambiance, I often use a ‘Trigger Delay’ range on the elements that will spawn (e.g. for bird one-shots), so that the first spawn doesn’t always trigger at the same time. I know I can delay a module by placing it later in the timeline, but the delay will always be the same, so being able to vary/randomize the delay is ideal.
  • Is there an equivalent way of adding a ‘Don’t Play’ to a sound module’s playlist (i.e. is creating an empty event via ‘Add Event Sound’ the way to do it)?
  • Will the Playlist %s eventually adjust like they do in Designer? Specifically, if I increase a file’s %, will it decrease the others? Right now, if I have one file at 40% and another at 60%, and I try to increase the 40% to 50%, it will auto-correct it back down to 40%.
  • Is the randomization in the Playlist a no-repeat randomization? That’s what I use most, so that’s fine, but eventually getting access to the other ‘Play modes’ may be useful. Specifically, ‘Sequential’ and ‘Sequential Event Restart’.


  • I’m thrilled that you added the side-chain compressor feature. Would side-chain compressors on things like mixer groups be the best way to handle volume ducking? Or are the mixer snapshots likely to be better for that (or a combo of both)? I think I’d lean towards the side-chain compressor, since it reacts to the amplitude of the sound causing the ducking. My understanding is that mixer snapshots will not factor in the amplitude of a sound causing a change in the mixer (i.e. a mixer snapshot will be less precise on the attack/release).
  • Are compressors likely to be CPU heavy? For scenarios where I have a single event that has the potential to end up with a high polyphony (i.e. high potential for volume spikes), I’m thinking of using a compressor on the event’s mixer channel to keep the volume of too many simultaneous instances of such an event from getting out of control.
  • In what scenario would I use a VCA instead of a group channel? I don’t think I understand what a VCA offers.
  • I suspect this may already be on the to-do list, but a single button for turning effects on/off may be more useful than the current right-click->bypass method. Very minor importance though.

‘Audio Assets’ folder
If, for example, I need to boost the volume of many/every file in my project, can I tell Studio to go update its copy of my files, or do I have to abandon my source files and edit the files in the ‘Audio Asset’ folder? Alternatively, can I deactivate the ‘Audio Asset’ folder or simply place my source files in there from the get-go, so that I don’t end up with two copies of the same file? I’m not sure what benefit the ‘Audio Asset’ folder offers, but I can understand that it may be necessary for Studio to function properly or for certain features to be possible. I’m just concerned that having this new intermediate asset location will complicate my workflow somewhat.

Cross-fades when looping
I’m not sure if the problem with gaps/pops/clicks occurring with granular synthesis has been solved with Studio, but I use granular synthesis to stitch together chunks of a long looping ambiance (with the objective of having chunks A, B & C being able to play sequentially without gaps between each chunk). If this gaps/clicks problem is not easy to solve, it would be nice if I could also do this via multiple modules, in series, cross-faded with each other. But I would need a way to cross-fade the last module in a looping timeline with the module at the start of the looping timeline. I assume this isn’t possible right now, but it might be a nice addition.
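To make the idea concrete, here’s a rough sketch of the kind of seam I mean, written as generic DSP in Python: overlap the tail of one chunk with the head of the next using an equal-power crossfade so no gap or click appears at the join. This is purely an illustration, nothing FMOD-specific:

```python
import math

def crossfade_join(chunk_a, chunk_b, overlap):
    """Join two sample lists, equal-power crossfading `overlap` samples
    of chunk_a's tail with chunk_b's head so there is no gap or click."""
    out = list(chunk_a[:-overlap])
    for i in range(overlap):
        t = (i + 0.5) / overlap          # fade position in (0, 1)
        fade_out = math.cos(t * math.pi / 2)  # outgoing chunk fades down
        fade_in = math.sin(t * math.pi / 2)   # incoming chunk fades up
        out.append(chunk_a[-overlap + i] * fade_out + chunk_b[i] * fade_in)
    out.extend(chunk_b[overlap:])
    return out
```

The same trick would need to apply between the last module in the looping timeline and the first one, which is the part that doesn’t seem possible right now.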


Thank you! You’ve guessed right: We really like to receive feedback for exactly those reasons.

There’s no functionality equivalent to templates yet. Until there is, we can only suggest that you copy and paste events.

The build process is a bit skeletal at the moment. Currently it makes a lot of assumptions about settings and places all built files in a folder called ‘build’ inside the project directory. Don’t worry, we will be fleshing it out in time.

Eventually, yes.

I think ‘ON’ is actually analogous to ‘Just Fail if Quietest.’ I’ll have to get back to you about that.
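To illustrate the difference between these behaviours, here’s a rough Python sketch of the three policies in question. The names and structure are purely illustrative, not Studio’s actual internals:

```python
def request_voice(voices, new_voice, max_voices, policy):
    """Try to start new_voice; voices is a list of dicts with
    'start_time' and 'volume' keys. Returns the updated voice list."""
    if len(voices) < max_voices:
        return voices + [new_voice]
    if policy == "just_fail":
        return voices  # the new request is simply dropped
    if policy == "steal_oldest":
        oldest = min(voices, key=lambda v: v["start_time"])
        return [v for v in voices if v is not oldest] + [new_voice]
    if policy == "just_fail_if_quietest":
        quietest = min(voices, key=lambda v: v["volume"])
        if new_voice["volume"] <= quietest["volume"]:
            return voices  # the newcomer is the quietest: drop it
        return [v for v in voices if v is not quietest] + [new_voice]
    raise ValueError(policy)
```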

Not yet. We’ll get there.

Seek speed will eventually be added. Velocity, however, has been replaced by the behavior of the Timeline.

No, because loop regions don’t make sense unless parameter values advance automatically, and they only do that on the Timeline.

This is on our to-do list. We’re also planning to improve and expand the snap-to-position behavior that exists currently.

The ‘3D Panner’ module’s ‘Pan Override’ popup has some of the functionality that you’re looking for, but it’s still a work in progress. We’re planning to expand its functionality greatly, and yes, you will be able to apply automation to it.

I’m not sure what you’re asking for. Currently, you can automate the volume of a track on multiple parameters, you can adjust the overall volume of the track by using the knob, you can place volume automation on individual modules, and you can add randomization as a modulator. We’re currently working on the ability to select and drag multiple automation points. What do you need to do that isn’t on the list?

You’re actually seeing the intended behavior.
The reason why the attack doesn’t start when the timeline is at a non-zero value is to enable easy editing and auditioning. Under normal circumstances, events begin at zero when triggered. If the event begins at any other point, Studio assumes that you want to audition only that specific part of the event, without the attack. After all, if you were trying to fine-tune some volume automation, having to sit through an attack time would be greatly frustrating. (If you do for some reason want your events to start from non-zero points on the timeline, don’t worry. That feature will be added when we add the music system.)
The ‘Stop’ button behaves, by default, like the ‘stop’ button of many DAWs: Clicking it while the event is playing literally stops the play head and causes all audio to cease. (This is approximately equivalent to the ‘pause’ button on a Compact Disc player.) You can simulate the event being stopped in-game by holding the button down until it begins to flash.

I’ll leave this question for Gino to answer, since he’s the expert.

Modules definitely shouldn’t be disappearing. Have you tried zooming in? It’s possible that the issue is related to scale. It’s also possible that the modules and automation of your event have been moved to a different parameter - have you tried clicking the various parameter tabs?

I’ll leave this one to Gino, as well.

Yes, we’ll be adding equivalent functionality to Studio.

We’re planning to add something equivalent to Designer’s ‘Don’t Play’ entry that allows you to specify the duration of the silence, which will make them more useful for sequential play modes.

Any playlist items that don’t have a set percentage are automatically assigned an equal portion of the weight not assigned to other playlist items. For example, if you have three playlist items and set one to 60%, the other two will automatically act as though their percentages are 20%. In effect, we already have the behavior you describe, but any playlist item whose percentage has been manually set is locked and cannot be indirectly reduced.
We may look at automatically reducing values that have been manually set, but we’ll have to work out how to do it in a manner that’s intuitive.
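To make the current rule concrete, here’s an illustrative Python sketch of the weighting described above (manually set percentages are locked; the remaining weight is split equally among unset items). This is a model for explanation, not Studio’s actual code:

```python
def effective_percentages(items):
    """items maps playlist item name -> manually set percentage,
    or None if the user hasn't set one. Returns effective weights."""
    locked = sum(p for p in items.values() if p is not None)
    unset = [n for n, p in items.items() if p is None]
    # Unset items share whatever weight the locked items leave over.
    share = (100.0 - locked) / len(unset) if unset else 0.0
    return {n: (p if p is not None else share) for n, p in items.items()}
```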

Yes, the randomization is a ‘no repeat’ randomization, since that just happens to be the most popular play mode for Designer 2010 events. ‘Sequential’ has also been added, since that’s the second most popular - just click on the button with a die to deactivate the randomization. We will add more playlist behavior settings as time goes on.

“There are no best practices, only good practices in context.” The answer to this question will vary hugely from project to project: Sidechaining dynamically adjusts itself from moment to moment, and is therefore excellent when your audio output is too chaotic to be predictable; snapshots have to be designed for the situations they’ll be used in, but can afford to be much more detailed and can be triggered by the parameter logic of events; and using a combination of the two grants a lot of flexibility and power, but will require more effort to set up.
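To illustrate the “reacts to amplitude” point, here’s a toy Python sketch of a sidechain-style ducker driven by an envelope follower. The coefficients and structure are assumptions for illustration only; a snapshot, by contrast, would apply a fixed, pre-designed change regardless of the source signal’s level:

```python
def duck_gains(sidechain, threshold=0.5, ratio=4.0, smooth=0.5):
    """Return a per-sample gain for the ducked bus, driven by the
    absolute level of the sidechain signal (e.g. a VO track)."""
    env, gains = 0.0, []
    for s in sidechain:
        env = smooth * env + (1.0 - smooth) * abs(s)  # envelope follower
        if env > threshold:
            over = env - threshold
            # The louder the sidechain, the more the bus is turned down.
            gains.append(max(0.0, 1.0 - over * (1.0 - 1.0 / ratio)))
        else:
            gains.append(1.0)  # below threshold: no ducking
    return gains
```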

Another one for Gino.

One of the main advantages of VCAs is that they’re completely independent of your routing. You can add buses from completely disparate locations in your signal path to the same VCA without having to worry about messing up your effect chains, which allows you to add an extra layer of complexity to your mix with only a small amount of mental acrobatics. Also (and hopefully you’ll never need this), if requirements change near the end of a project and you suddenly need to control levels in a way that wasn’t apparent when you set up your routing, adding a new VCA is a lot easier than reorganizing your entire routing structure.

Hmm, we’ll consider it.

To update files in the audio bin to new versions, select the group of files to be updated in the Audio Bin window, right-click on the selection, then select ‘Replace…’ from the context-sensitive menu. This will prompt you to select a directory. Studio will search this directory for files with the same file names as the audio assets in the selection, and will automatically replace the files you currently have imported with the new versions.
The main benefit of the Audio Asset folder is that it makes projects much more resilient. Many users found Designer’s Audio Source Directory setting obscure and hard to use, especially if they didn’t set it up until after importing a large number of files, and banks could very easily break if users stored their audio files on network drives or tried to move the project from one location to another. By automatically copying imported audio files into the project’s assets folder, projects become a lot easier to handle: if you want to work on your project on a different computer, you just have to copy it across, and all the audio files will go with it. It’s one less thing for users to worry about, and it complements Studio’s revision control integration.
Still, if you really don’t like storing your audio files inside your project directory, there is an alternative. In the Preferences window, you can specify an ‘Audio Asset’ folder external to the project. Any files you import from this folder (or from its subfolders) will be externally referenced instead of imported into the Audio Asset folder, allowing you to edit the files manually.
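The ‘Replace…’ matching behaviour described above can be pictured with this illustrative Python sketch, which pairs imported asset names with same-named files from the directory the user picked. It’s a model of the described behaviour, not Studio’s actual code:

```python
from pathlib import Path

def match_replacements(asset_names, candidate_paths):
    """Pair each imported asset file name with a same-named replacement.
    candidate_paths is any iterable of Path objects."""
    by_name = {p.name: p for p in candidate_paths}
    return {name: by_name[name] for name in asset_names if name in by_name}

def match_replacements_in_dir(asset_names, new_dir):
    """Convenience wrapper that scans a directory on disk."""
    files = (p for p in Path(new_dir).iterdir() if p.is_file())
    return match_replacements(asset_names, files)
```

Assets without a same-named file in the chosen directory are simply left alone, which matches the "search this directory for files with the same file names" description.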

This is actually part of our plans for the music system.

Though I’ve left a few questions for Gino to answer, since he knows our filters better than I do.

If you have any more questions, send them our way!

Actually, this is possible. It’s a case of selecting the audio track, right-clicking on the output meter in the deck to bring up the output format context menu and selecting stereo or surround. There is a bug which is currently preventing automation on the stereo pan dial but we will fix this for the next release.

You won’t find that functionality in the 3D Panner. We will be adding a speaker level effect as well as panners you can place anywhere in the signal chain.

On the topic of the 3D panner, events which aren’t destined to be 3D positioned in the game don’t need a 3D panner, so you can remove it from the master track. We’ll address this workflow issue when we introduce template-like functionality.

Yep, I understand what you’re saying. We will be adding a ‘gain’ effect which you can place anywhere in the signal chain, as well as the ability to manipulate parts or all of automation curves.

There is no occlusion effect, but there are two alternatives. The most similar-sounding one is the 3-EQ effect, which can achieve a similar result to Designer’s distance filtering when used as a bandpass (lows and highs at -inf and X-Slope at 12dB). It’s currently more expensive than distance filtering, but after further optimisation it will be only marginally more expensive. The other alternative is the low- and high-pass filters, but these are currently of much higher order. We will eventually roll the lowpass, highpass and param EQ filters into one Parametric EQ effect, which will have selectable filter orders and will sound the same and use the same CPU as the filters used in Designer for the equivalent filter orders.
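To illustrate why filter order translates into CPU cost: a first-order (one-pole) lowpass of the kind used for cheap distance-style filtering needs only a single multiply-add of state per sample, and each additional filter order adds roughly another stage of the same work. This is textbook DSP, not FMOD code:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000):
    """First-order (6 dB/octave) lowpass: one multiply-add per sample."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y = (1.0 - a) * x + a * y  # the entire per-sample state update
        out.append(y)
    return out
```

A higher-order filter (e.g. a cascade of biquads) repeats comparable arithmetic per stage, which is why it costs more CPU but gives a steeper slope.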

Occlusion level isn’t currently supported, which probably explains why it’s missing, but we will improve the importing process and map it to a filter curve in the future. I’m not sure why 3D pan level is not being imported. We’ll look into it.

These parameters currently aren’t being fully mapped but we will improve that once we’ve added a speaker level effect.

It will also depend on the ducking sound that you prefer. There will be other possibilities and combinations in the future, such as sidechain modulators, which will allow you to modulate any effect parameter from an envelope (so you can filter out mids, etc.), or sidechain parameters, where you can draw your own response curve for any effect parameter or trigger snapshots from envelopes.

Compressors are not cheap so I would limit them to group buses and use max voices on input buses if you have to.

VCAs are also cheaper than Group buses because they don’t create extra routing and only control volumes of existing buses.

Please keep sending us feedback!


This is a fantastic thread with really great questions! Rather than making a new thread (unless you guys prefer that) I’d like to make a suggestion that I believe could help with auditioning sounds in the “sound” part of the deck.

-Clicking on the wave image of the selected sound should play that sound with the current volume and pitch parameters that have been set. Or, in addition, double-clicking the name of the audio file could do the exact same thing, or using the arrow keys to highlight a sound and pressing ‘enter’ could play it.
Personally I think all three of those options would be very intuitive for people.

Here’s a quick image.

-Another suggestion I have, which I know was not in Designer 2010, would be the ability to have Studio play all of those audio files at the exact same time when that event is fired.
I think right now you have to set separate tracks for each sound to achieve that effect. This makes a lot of sense since you have greater control, but it seems like a lot of extra work just to have those sounds play at the same time.
So, if we want volume and pitch randomization to happen every time that track is played (with all accompanying sounds simultaneously being fired), I think a clickable option next to the “die” that says “simultaneous” or something to that effect would be huge! Maybe even controls that let you choose the maximum number of sounds that would play back in that track.

-I noticed in Designer we have things like “play count” and “spawn time” in the sound defs. I haven’t noticed anything like this in Studio yet; did I just miss it? I know we moved away from sound defs, just not sure if those functions still exist elsewhere.

Wow, now that I’m writing these, the ideas are flowing :slight_smile:
-Take a look at that above image, specifically the area above the wave view that shows you the volume and pitch for that particular audio file. At first look, I thought that was where we set that audio file’s volume randomization and pitch randomization. It would be amazing to have that ability in addition to what’s currently there now. From first glance at the interface, it looked like that’s what it was (since in Designer, volume and volume randomization were both in the same place, the properties tab). I understand that to achieve the randomization, you must go to the event macro area of the deck, right-click the pitch or volume knob, add modulation, then set that dial. And it’s automatable, which is great. It just seems easier to have what I suggested in addition to what you guys currently have. Perhaps adjusting one also affects the other, like they’re linked.

-I think it would be valuable if people could mouse over certain icons or areas of the interface and a little text display would pop up telling you what your mouse is on. Like, mousing over the “dice” for 2 seconds or whatever would display what that dice is called or does. Or maybe when mousing over something, text could pop up saying “press ‘F1’ or ‘F2’ to go to that area of the manual”, which would then explain what that thing is/does.

-The scroll bar for scrolling left/right in the “deck” is very tiny. Maybe mousing over the text area above the white scroll line of the deck and rolling the mouse scroll wheel could scroll from left to right, in addition to the functionality currently implemented for the white scroll bar. And/or the white scroll bar could be made thicker, so it’s easier to grab and move.

-Perhaps right clicking the wave image of an audio file in the sound area of the deck could drop down a menu that allows you to see/highlight that particular file in the audio bin, where from there we can do things like edit that file in an outside editor (which I love how we can do that, thank you!)


Thanks for your feedback and suggestions, Vin!

We plan to be able to audition sounds at every level, both in sound modules and the audio bin. We’ll consider each of the 3 methods you suggested when we add that feature.

Once you’re able to spill sounds out over tracks when dragging them in, and to multi-select items to bulk-edit properties, this case will become easier. We’ll be introducing other sound modules more suited to simultaneous playback, where randomisation on properties such as start delay or panning will also be possible.

We’re planning to add different randomisation modes, which you’ll select on the pitch and volume dials themselves. The default mode on both dials for a multi-sound will be ‘per playlist item’ randomisation. The two number controls are intended for relative adjustments between items. Do you imagine having different levels of randomisation on each item?

Thanks again for taking the time, and let us know your thoughts on this.


To answer more of your suggestions…

These features currently are not in Studio, but they are coming.

Yep, we’ll be displaying that info in the status bar. Good idea about having links to the manual. I’ll mention that to the documentation guys.

Good idea, we’ll look into that.

We will add this feature as well as other context menus to jump about related items in the interface.


Thanks a lot for the quick and detailed response. And thanks for bringing up those other items, Vin.

I on occasion will set up an event with 2 timelines. The best example that I can think of right now is one where I have the ‘Control Parameter’ advancing through a timeline and reaching a Sustain Point on a looping module. If I feel that the looping module could benefit from a bit of variation, I then add a second timeline parameter (that’s not the ‘Control Parameter’) and have that loop through some volume, pitch, pan or filtering automation. This keeps the sustained loop from getting stale. So having the ability to add a second timeline would be useful to me.

Since I brought up ‘Control Parameters’, will Studio eventually allow you to define what parameter is the control parameter (or maybe it already does)? I assume that in most cases the timeline parameter will be the control parameter, but I could see someone needing to assign this to another parameter.

In Designer, I’ll sometimes have 2 Volume curves/effects on the same event layer (one for modulation and one for controlling the overall volume). From what you said and from what Gino said, it sounds like I’ll be well covered for this. I also missed that I could control the overall volume at the module playlist level, but I guess this wouldn’t be as ideal if I had multiple different modules on the same track, or if that module was used elsewhere and I didn’t want to have the volume change affect the other events using it.

Thanks for explaining that. Sounds good to me.

I re-checked a few 3D events and can’t locate the modules or the 3D Pan Level. I can send over the FDP if you guys want to see if it happens on your end too.

What you described sounds pretty good to me. I didn’t realize that typing in a value was analogous to ‘Lock Percentage’ in Designer. Maybe having the percentages always visible & clickable/editable, and just renaming ‘Set Play Percentage’ to ‘Lock Play Percentage’, would clarify things? Or just wait and see if anyone else asks about this. I may not be representative of other users.

That sounds like it’ll work great for me. Thanks for the explanation.

I often run into problems where 2 or more instances of the same event can trigger simultaneously and I end up with a volume spike. For certain events, I can’t reduce the max voices because that event also needs to be able to play in a more staggered manner without cutting off earlier instances of the same event. Is there another way of dealing with this? Outside of a compressor, the only thing that I can think of is having a ‘cool-down’ period between voices. In other words, if I set max voices to 3 and ‘cool-down’ to 500ms, the 2nd and 3rd instances of that event will only be allowed to play if it’s been 500ms since the previous instance triggered.

Thanks again,

Some of the planned music system improvements will allow you to do something equivalent to what you’ve described using only the timeline.
That said, your use-case does illustrate certain potential usability concerns, so we will be sure to consider it as we continue Studio’s development.

Studio event tracks don’t have a control parameter. Or, to put it another way, every parameter is a control parameter. You may not have noticed, but whenever you create a trigger region or automation curve, it will be controlled by the parameter that it was created on. This means that you can create trigger regions on one parameter, and put more on a different parameter, and they’ll each be triggered by their respective parameters without any need for the special volume automation curves that were sometimes used to achieve the same effect in Designer 2010.

No problem! We plan to add tooltips and status bar text that will make these comparatively arcane features more obvious, but it will take us a while to get there. Thanks for being patient.

Ah, that would be a big help. Just send the .fdp to

It’s sort of a combination of ‘Set Play Percentage’ and ‘Lock Play Percentage,’ and we made it that way to reduce the number of steps it takes to set up your play percentages. As you’ve pointed out, though, it does currently interfere with the process of tweaking values, so we’ll definitely be looking at ways of making it more intuitive and accommodating.

What you’ve described is highly reminiscent of Designer’s ‘Just Fail’ Max Playbacks Behavior. Studio will have an equivalent of that. The cooldown idea is an interesting one - we’ll discuss it at our next design meeting.
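For the record, the cool-down idea discussed above can be sketched like this: a new instance is allowed only if the polyphony cap isn’t hit and enough time has passed since the previous trigger. This is hypothetical logic for illustration, not an existing Studio feature:

```python
def can_trigger(now_ms, last_trigger_ms, active_voices, max_voices,
                cooldown_ms=500):
    """Decide whether a new event instance may start right now."""
    if active_voices >= max_voices:
        return False  # hard polyphony cap still applies
    if last_trigger_ms is not None and now_ms - last_trigger_ms < cooldown_ms:
        return False  # still cooling down: suppress the volume spike
    return True
```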

If you have any more questions, please keep asking them!

Hi there,
When I open a previously created event, I notice that the timeline view is zoomed out pretty far. It seems like it would be easier to scroll the mouse wheel on empty grey space on any of the tracks within an event to zoom in and out. It seems like extra work to have to move the mouse down to the edge of the event birdseye view and drag it in order to zoom in and out.
Like other DAWs, perhaps this could also be achieved with buttons specifically for zooming in and out.

We already have a shortcut for zooming: Hover the cursor over the tracks, hold down the ‘Alt’ key and spin the mouse wheel. This shortcut is described in current versions of the manual.

Thanks for the responses, Joseph. I’ve finally sent over the FDP (sorry for the delay on that).

Sounds good. I hadn’t picked up on that feature regarding modules.

Thanks! An alternate approach to compressors would be great. I’m not sure how other games deal with this problem. I know some don’t because I hear such volume spikes in other games (say when 2 of the same enemy get killed at the same time). I suspect some games are dealing with this by having the game code limit how many instances of an audio event get triggered. We’ve done a bit of both here (try to control it via game code, and just live with it for other sounds).

Thanks again,

Are there plans for loading/saving effects presets?
Maybe even editable Firelight presets?

Hi FMOD Team, and thanks for this new release!
I’m rather green in the field and never used Designer before - I jumped directly onto Studio.
I went through the manual and the forum but couldn’t find an answer for this, so every help is truly welcome.
I created 2 events:
Event A:
Track 1: 2 sec. music (timecode 00-02)
Track 2: Event B (I nested event B by dragging it into the track). Time code 0.2- onward

Event B:
Track 1: “Win” music cue
Track 2: “Loose” music cue

Parameter Win-Loose
Track 1: Volume automated to unity gain when the Win-Loose parameter is 0.0-0.5; when it’s 0.5-1, volume automated to -infinity
Track 2: Volume automated to -infinity when the Win-Loose parameter is 0.0-0.5; when it’s 0.5-1, volume automated to unity gain

Basically the Event B will playback either of the two music cues depending on the value of the Win-Loose parameter.
Super simple, and it works by itself when you play back event B.
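To spell out what my automation is doing, here it is modelled as a simple Python function; this is just a model of the two curves, not Studio internals:

```python
def win_loose_gains(param):
    """Returns (win_gain, loose_gain) for a Win-Loose parameter value.
    1.0 is unity gain, 0.0 stands in for -infinity dB."""
    win = 1.0 if param <= 0.5 else 0.0
    loose = 0.0 if param <= 0.5 else 1.0
    return win, loose
```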

Problem is:
When I nest event B in event A’s Track 2 and click on it, I see the Master of Track 2, which comes with a rotary knob for the Win-Loose parameter. Makes sense.
But that knob doesn’t really seem to affect the parameter at all. E.g. I can select whatever value I want on the knob, but the parameter value always seems to be 0. So only one of the music cues is played, regardless of the Win-Loose value I select with the knob.
Am I doing something wrong?
This also happens for any nested event I create.
I tried to use subevents but I don’t seem to be able to drag them into the parent event…
I’ve been kind of stuck here for days and couldn’t find a detailed explanation in the manual, so any help is truly appreciated! :slight_smile:

This is a bug. Changing the values of an event reference module’s parameter knobs in the deck should change the values of the associated event instance’s parameters, but as you’ve noticed, this isn’t happening. I’ve added this issue to our bug tracker; thanks for reporting it.

Well, I posted this about 10 days ago, but here are a couple more bugs in FMOD Studio for Mac, using 10.6.7 and the latest (2.3) preview:

  1. The cursor still disappears when drawing automation in a track at a track boundary point (the bottom especially). Wiggling the mouse furiously can sometimes bring it back, but usually the only solution is to save, close, and reopen FMOD Studio.

  2. When creating an Event Module, if you edit the event and then navigate back up to the higher level, the Event Module will attach itself to the cursor, following it and moving between tracks as you move the mouse. Clicking on the Event Module in a track releases this attachment, but it’s definitely frustrating.

  3. Sidechaining appears to be broken when a pre-fader send is involved. I created a pre-fader send/return and assigned it to four audio tracks, then turned down the fader level of those four tracks so that only the send provided output to the return track (the music track). Then I created a VO track, added a sidechain output, and used it to drive the compressor on the event’s master bus. The result was a distorted mess, although the fader-based audio played back without issue. As far as I’m aware, I didn’t create a feedback loop, did I?

  4. I noticed something recently in the Trigger Behavior of an Event Module. Setting the Time factor to a value for an event caused the audio to locate that many seconds into the event’s sound itself, which cut off the beginning of the audio. Am I right in believing that Trigger Behavior should NOT be doing this? I thought it was a variable delay on the event or sound itself. Please correct me if I’m wrong.

  5. Sometimes opening an FMOD Studio document and then clicking on an event repositions the window, if the window has been made smaller than the maximum screen dimensions.

Please at least mark 1 and 2 as bugs; they are repeatable and annoying.


Are there plans to incorporate automation of interval controls in the sound scatterer?

Thanks for all your feedback, it’s truly appreciated! There are fixes for these coming very shortly.

I’m not entirely clear on the details of your setup. If you send a copy of the problematic project to us, we’ll be able to take a closer look and work out what might be going wrong.

For an event sound, the delay should postpone the start of the contained event. Is this not happening? For timelocked sounds (i.e. single, non-looping sounds, with the waveform drawn in the trigger region), delay will cut off the beginning of the audio file. We are still reviewing the best way to give users control over whether timeline sounds are timelocked (sample accurate) vs. triggered, as well as whether a module plays to the end of the audio file after the cursor has left the trigger region vs. cutting off.
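The timelocked vs. triggered distinction can be sketched as follows (the function name is hypothetical and this is only a model of the behaviour described above, not the FMOD implementation): a delay on a triggered sound postpones the whole sound, whereas a delay on a timelocked sound skips that much of the file, since the sound stays locked to its timeline position.

```cpp
// Given a trigger delay in seconds, return the offset into the audio
// file at which playback begins.
double fileStartOffset(bool timelocked, double delaySeconds) {
    if (!timelocked) {
        // Triggered: the delay simply postpones the start; playback
        // then begins from the top of the file.
        return 0.0;
    }
    // Timelocked (sample accurate): the sound stays locked to its
    // timeline position, so the first 'delaySeconds' of the file are
    // skipped, cutting off the beginning of the audio.
    return delaySeconds;
}
```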

This is typically due to the minimum size of the window growing larger than the current window size.

Certainly! Our long-term plan is to allow automation of all properties (including toggle buttons, group buttons and range sliders). This may be some time down the road, however.

OK, adding fuel to the fire: I’m teaching game audio at the moment and using a two-button trackball with no scroll wheel, so I second the keyboard shortcuts option/alternative.

Couple of things to add:

For single-file events, it might be nice to put a name label in the deck area. Alternatively, right-clicking on the waveform could give an option to link to the original, or auto-highlight it in the Audio Bin. I think something like this was discussed earlier, but right now single-file events have to be visually matched against the Audio Bin’s waveform representation, which is sort of an absurd concept to DAW-centric newbies, though I know there’s a different paradigm at work here.

Question: how do you specify an external audio editor? I’m on a Mac and it just opens the file in iTunes at the moment. I supposed I could specify my preference in the Get Info dialog? I just tried that, and it still wants to open in iTunes, even though clicking the file in the Finder opens it in the other editor. Is there a current workaround for this? It might be nice to be able to set a default audio editor in the Preferences, just saying…

Anyway, great job on the new app. It streamlines a lot of functionality, and it’s good to know about the future options, like being able to preview audio files and the modules to control spawning time. The Timeline Editor tab is certainly convenient, but it does cause a bit of an identity crisis: I was just trying to explain to my students how the X axis does not necessarily equal time, and here you go confusing everyone again… :slight_smile:


The next release will feature items in the View menu and keyboard shortcuts for zooming in and out without using a scroll wheel.

Labels indicating the contents of single sound modules are on the cards. For now, we can only recommend that you manually assign memorable names to single sound modules by double-clicking on the tab in the top-right corner of the module’s deck.

We’re currently in the process of implementing this feature; I believe it will be included in the next release.