Looping "daisy chain" playlist overhead?

I’ve tried making ‘equipment handling rattle’ type sounds by cutting handling sounds into very short pieces that I load into a Multi-Sound playlist and set the playlist to loop, so that those short sounds play one after another in random order over the length of the trigger region.

Just wondering… does this kind of daisy chaining of sounds cause extra overhead, if the samples are very short and rapidly play one after another? How far is it feasible to go? (For example, dividing a diesel engine sound into single-cycle waveforms? Too fast, too much?)

We’ve had one odd case of one such sound getting orphaned and ending up playing forever on a console. It was a high overhead situation overall with lots of physics happening, and it’s only happened once so far. But still, just checking.

There shouldn’t be any problem with using extremely short assets in looping multi instrument playlists. FMOD Studio’s scheduler is able to handle it.

I haven’t been able to reproduce the “orphaned sound” issue that you describe. Is there any more information you can give us about the circumstances in which it occurred?

Thank you for the confirmation. Is there any difference in performance between playing a single looped sample vs. daisy-chained samples from the playlist? What happens if the Multi-Sound is pitched up a lot? Will it cause buffer read issues if the scheduler has to jump between samples?

That “orphaned sound” was a freak incident; it only ever happened once. I’ll check whether there was any more to that incident than just overall heavy overhead on the game side (combat taking place, with lots of physics, projectiles, and visual effects occurring simultaneously).

Is there any difference in performance between playing a single looped sample vs. daisy-chained samples from the playlist?

Assuming all the samples are the same length and set to a loading mode other than streaming, playing a daisy chain of samples will require more audio assets to be loaded into memory. Since you’re using extremely short samples, however, the added cost is unlikely to be significant. (You’re unlikely to use the streaming loading mode for this use case, as it’s the exact opposite of the situation that loading mode is designed for; I mention it only for completeness.)

What happens if the Multi-Sound is pitched up a lot? Will it cause buffer read issues if the scheduler has to jump between samples?

No buffer read issues. The scheduler takes care of it.

The scheduler is so called because it schedules changes in advance, ensuring each new audio file is loaded and ready to start playing at exactly the right moment, instead of having to wait for update() to be called. It achieves this by predicting what is about to happen based on the current state of the project: if it can see that a looping multi instrument playlist is going to finish playing its current playlist entry within the next few milliseconds, it selects the next playlist entry and schedules it to start playing at the appropriate moment. If it sees that that playlist entry will also finish playing soon, it selects the next entry and schedules that to play a little further in the future, and so on. This allows you to do things that would otherwise be impossible, such as playing hundreds of tiny audio files in rapid-fire succession without noticeable gaps or seams. (Scheduling things in advance does mean that there’s technically a tiny amount of latency between making a real-time change and that change being applied to the audio, but it’s negligible under most circumstances.)

There is one potential performance-related issue that might result from using tiny audio files in a looping playlist: each new audio file played by a multi instrument counts as a voice from the moment it is scheduled. Most multi instruments’ audio files are longer than the scheduler delay, so this isn’t significant - but if your audio files are shorter than the scheduler delay, you may find that the instrument consumes more voices than other instruments while playing.


Hello again, aka topic bump. I’ve now switched to another project and tried the ‘daisy chain playlist’ method again, since it worked so well in the previous project.

However, this time the playback is gapped: there is a gap of silence after every sample, even though the samples are not particularly short, none are set to streaming, and there are no silent tails in the samples themselves (they are cut precisely to length). The very same samples play gaplessly one after another in the other project.

Are there some project-specific API buffering / lookahead / scheduler settings that may be causing this difference between the projects, or what else could be causing this? Both projects are using the same version of FMOD Studio, 1.10.04.

Erm… actually they don’t. I just tested it again, and those same samples sound gappy in the previous project as well. In a sample editor those samples are gapless, and they play gaplessly if imported into Reaper (for example).

I got it to behave by going to the Asset Bin and enabling, then disabling again, Streaming on all the wavs in the playlist. Now they play gaplessly.

Side note: in both these projects, the wav assets are on a shared network drive instead of being checked into Perforce.

To conserve resources, FMOD Studio only loads the sample data for assets into memory when it needs it. This can result in small delays when auditioning an event containing an asset for the first time during a session. Be assured that this loading behavior only affects auditioning in FMOD Studio; it has no effect on in-game behavior, which is governed by how and when events and banks are loaded by your game’s code.

Toggling the streaming behavior of the assets would have caused their sample data to be loaded into memory, explaining why the gaps disappeared. There are several other methods that would have a similar effect, including auditioning the assets, and auditioning events that made use of the assets.

By “auditioning events” do you mean using Sandbox? Because playing the event in the Event Editor is gappy. But I can try playing the wavs in the Asset Bin first, then playing the event in the Event Editor, to see if it stops the gappiness.

In any case, it is working gaplessly in the game, so no problems there.

My apologies, I should have been more clear.

If you select an event in the event editor, the asset sample data used by that event is loaded into memory. In some cases, it can take a few moments for this loading to finish; auditioning when the loading is not yet complete can result in the “gappy” playback that you’ve observed.

However, once you’ve auditioned the event, the associated asset sample data remains in memory as long as the event is still open in at least one tab of the event editor window’s editor pane. If you audition the event again without first deselecting it, the audition should be gapless.

If you want to compare multiple different events while auditioning them gaplessly, you can use multiple tabs in the event editor window, or multiple event editor windows.

Thank you for the clarification! Yep, looks like I was impatient. Waiting for a while after selecting an event, before playing it, seems to fix the gappiness in preview/audition. :grinning:

Is there a minimum length limit for the samples after all? I’ve been working on a gatling gun sound with this granular method, where the samples are about 36 milliseconds in length (but they do vary a bit). It sounds different between FMOD Studio (1.10.04) and the game. Seems like there is a gap between each sample on the game side.

Here is an example mp3 (Google Drive). First in Studio, then in the game. There is a sine wave together with the sound, to verify that the event is not getting pitched by a 44.1k vs 48k error or anything like that.

I have already tried the following:

  • Downsampling to 32k for shorter buffer reads
  • Changing the encoding to PCM (in case there’s a byte boundary glitch with Ogg, which should not be the case)

… but to no avail.

What could be causing this?

Could you send us the project you’re using to reproduce this behavior? It’s difficult to pinpoint the cause of the problem just from that .mp3.

I should mention that the way in which FMOD Studio auditions events differs from the way in which the FMOD Engine plays events: FMOD Studio loads loose audio files from the asset bin, in accordance with FMOD Studio’s methods for doing so and using the assets’ existing encoding formats, whereas the FMOD Engine loads sample data from banks, using a loading strategy determined by your game’s code and encoding formats set by your game’s banks. Without access to a project that exhibits this issue, I can’t know whether this difference accounts for the behavior you’re observing, but it is a possibility.
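For what it’s worth, here’s a minimal sketch of how a game might force a bank’s sample data to be resident before playing events, using the FMOD Studio C++ API. The bank file name is a placeholder, error-result checking is omitted, and this is only one possible loading strategy, not a prescription:

```cpp
#include "fmod_studio.hpp"

// Minimal sketch (error handling omitted): pre-load a bank's sample
// data in the FMOD Engine so that playback can start without gaps.
void preloadBank(FMOD::Studio::System* system)
{
    FMOD::Studio::Bank* bank = nullptr;
    system->loadBankFile("Master Bank.bank",
                         FMOD_STUDIO_LOAD_BANK_NORMAL, &bank);

    // Loading a bank loads event metadata only; sample data is loaded
    // separately. Pre-load all of it up front:
    bank->loadSampleData();

    // Sample data loading is asynchronous; keep calling update() and
    // check Bank::getSampleLoadingState() before relying on it.
    system->update();
}
```

Whether pre-loading everything, loading per event, or streaming is appropriate depends on your game’s memory budget, which is exactly the kind of difference between Studio auditioning and in-game playback described above.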