Impulse Responses - Proper Setup?


I’m trying to use a custom impulse response that was created by recording a sine sweep in a room, deconvolving it, and applying it to a dry clap to get a .WAV file. I then set up a reverb return with a convolution reverb effect added and dragged the IR .WAV into the effect settings. The result is very harsh, tinny, and metallic. Here is the link to the IR .WAV:

I’m not sure what the issue is, but it sounds nothing like the reverb in the room where we captured the IR when A/B’d with a dry sample we played in the room. Could you please give any more details on this workflow (I don’t see any docs)? What’s the best method for creating the impulse response .WAV from a sine sweep recording? What is the correct way to integrate it so it sounds correct? Any guidance would be appreciated. Thank you!

Hi ,
Firstly, I’ll ask: what did you use to deconvolve your sweep? Have you first confirmed it sounds correct in a DAW like Pro Tools or Ableton?

The main requirement on our side is that the sample rate of the .WAV used as the impulse matches the output rate of your project (e.g. 48kHz).

If it doesn’t, the result can sound pitched or tinny. For best performance, make sure the impulse is 16-bit, trimmed of silence, and 48kHz.
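To illustrate the point about matching rates, a quick way to sanity-check a candidate IR file before dropping it into FMOD is to inspect its header. Here is a minimal sketch using Python's standard `wave` module; the function name and default project rate are my own placeholders, not anything from FMOD:

```python
import wave

def ir_matches_project(path, project_rate=48000):
    """Check a candidate impulse-response WAV against the project rate.

    The convolution effect does not resample the impulse, so e.g. a
    44.1 kHz IR in a 48 kHz project plays back shifted and sounds wrong.
    Returns (ok, details) for quick inspection.
    """
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        bits = w.getsampwidth() * 8
    ok = (rate == project_rate)
    return ok, f"{rate} Hz / {bits}-bit (project wants {project_rate} Hz)"
```

You would run this on every exported IR once, before it ever reaches the engine, and re-export from your editor if the rate mismatches.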

Hey Brett,

We recorded sine sweeps in the room and then used Altiverb to deconvolve them. That gives us the .irbulk file, and when applied to sounds within Altiverb it sounds correct. We’re then applying the impulse response to a dry clap, exporting that as a .WAV, and dropping it into FMOD. I made a new export at 16-bit, 48kHz and it still sounds wrong. Is there something other than a dry clap we should use for best results within FMOD? Thanks, Brett!

Hi ,
Oh, I thought that was the resulting output .WAV of your reverb, i.e. a convolved output?
That .WAV is actually your input impulse?
Typically, only a sharp impulse sound like a click should be the source of your impulse recording. This is a bit outside the scope of the FMOD side; it’s more to do with the generation of your impulse. Sine sweeps are typical, but I assume the deconvolver is the last step, producing a .WAV that is a compatible input for a DAW or FMOD.
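To make the sweep-and-deconvolve idea concrete, here is a toy sketch with small number sequences (this is only conceptual; real tools like Altiverb work on sine-sweep recordings in the frequency domain, and all the sequences below are invented stand-ins):

```python
def convolve(x, h):
    """The room recording is the exciter convolved with the room's response."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def deconvolve(y, x):
    """Recover h from y = x * h by polynomial long division.

    Exact for toy sequences; real deconvolvers use FFT-based division
    on sweep recordings instead, which is robust to noise.
    """
    y = list(y)
    h = []
    for i in range(len(y) - len(x) + 1):
        coeff = y[i] / x[0]
        h.append(coeff)
        for j, xj in enumerate(x):
            y[i + j] -= coeff * xj
    return h

exciter = [1.0, 0.5]             # stand-in for the sweep signal
room_ir = [1.0, 0.6, 0.3, 0.1]   # stand-in for the space's response
recording = convolve(exciter, room_ir)   # what the microphone captures
recovered = deconvolve(recording, exciter)  # the exciter removed again
```

The point of the sketch: `recovered` equals `room_ir`, i.e. deconvolution strips the exciter back out and leaves only the space’s response, which is the file the convolution reverb expects.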

You should be able to take that .WAV and drop it into a DAW’s convolution reverb plugin, rather than using the .irbulk file, for example. This is what I meant by testing it outside of FMOD.

Brett - that was the actual IR. I wanted to confirm that there wasn’t something I was missing as far as setup in FMOD. I’ll keep testing methods on the IR side of things. Appreciate the help!

Hi @jessekirbs,

If I understood correctly, you are rendering your deconvolved impulse response with a clap sound and exporting that as an impulse response?

If yes, then that’s your mistake.

You can use a sweep signal or a clap-like signal (a balloon pop, a starter pistol, a film clapperboard) to excite the air inside a space and capture the space’s response. You then deconvolve the recording, which means you remove the signal you used as the exciter, and you are left with only the space’s response.

The latter is what’s used in the process of convolution, which simply multiplies each sample of your sound stream by every sample of the (deconvolved) impulse response and layers the results by adding the time-shifted copies together; that sum produces your acoustic simulation output.
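The multiply-and-layer process described above can be sketched in a few lines of plain Python. This is only a conceptual illustration, not how FMOD implements it internally (real-time convolvers use FFT-based partitioned convolution for speed):

```python
def convolve(dry, ir):
    """Multiply every dry sample by the whole impulse response and
    layer (add) the scaled, time-shifted copies together."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, sample in enumerate(dry):
        for j, tap in enumerate(ir):
            out[i + j] += sample * tap
    return out

# A one-sample, full-scale impulse as the IR leaves the signal untouched,
# which is exactly the "1.0 factor" property mentioned below:
dry = [0.5, -0.25, 0.125]
assert convolve(dry, [1.0]) == dry
```

Note how each input sample spawns a full copy of the IR at its own position in time; the tail of the output is those overlapping copies summed, which is why the output is longer than the dry input.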

By adding an extra exciter (the extra clap in your case), you effectively undo the deconvolving that you did in the first place to remove the sweep exciter. So the convolver in FMOD works fine, but your sound is, as you describe it perfectly, metallic and harsh. The tinniness you report might also be because the added clap raises the gain, so the convolver has to normalize even more to achieve its layering, and the volume ends up lower than expected.

If you want to export the impulse response to a native .WAV file to use in other software (like FMOD), instead of adding a clap in your DAW, try adding an impulse: a single sample at full volume. This also gives your exported impulse responses a 1.0 factor as their first multiplication layer in the convolution process, which sometimes works nicely in convolvers used as inserts (instead of on auxiliary sends) with 100% wet settings.

Here, I made two impulse files for you:!AoFZ1MP3ewRggeVrcTRCVG3LmC2ceg?e=KmhFCG

One is 32-bit floating point, and because I don’t know which DAW/editor you use, I also made one at 24-bit. I also added enough silence to make each file 1 second long, as some DAWs and editors have issues with files that are only one sample in duration.
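For anyone who prefers to generate such an impulse file themselves rather than download one, here is a minimal sketch following the recipe above: one full-scale sample followed by silence to pad the file to one second. It uses Python's standard `wave` module, which only writes integer PCM, so this produces a 16-bit variant (the 32-bit float and 24-bit versions would need a third-party library); the file name, rate, and duration are assumptions to adjust to your project:

```python
import struct
import wave

def write_impulse_wav(path, rate=48000, seconds=1.0):
    """Write a mono 16-bit PCM WAV: one full-scale sample, then silence."""
    total = int(rate * seconds)
    # 32767 is the maximum positive 16-bit sample ("full volume");
    # every following frame is two zero bytes of silence.
    frames = struct.pack("<h", 32767) + b"\x00\x00" * (total - 1)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)    # 2 bytes = 16-bit
        w.setframerate(rate)
        w.writeframes(frames)

write_impulse_wav("impulse_48k_16bit.wav")
```

Rendering your deconvolved IR against this file (instead of a clap) gives you a clean .WAV export whose first convolution tap is effectively 1.0.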

For games, as @brett suggests, it’s better to optimize your exported impulse file. Anything around 44.1kHz / 16-bit would be good. In FMOD, since the convolver doesn’t resample the impulses to the project’s rate, you should decide early on the sample rate of your final project and use that rate when exporting your impulses. Pro tip: don’t use dithering on the exports; for impulse responses it will mess with the fidelity of your final outcome.

@brett, if you are referring to an impulse signal as the “click” in your recommendation above, I disagree that it would make a good exciter in this case, because @jessekirbs mentions that he wants to capture the response of an actual space. Impulse signals are excellent for capturing hardware processors or software plugins, but they cannot provide the power needed to capture actual rooms. For real spaces you need loud bursts of noise, since you need both power and full spectrum to capture the room’s full response. Some people pop balloons, use film clapperboards, or even the starter pistols used in running competitions; these provide a good ratio of full-spectrum excitement to power for the room to react to.

I will leave you with a final tip that has worked great for me in game audio through the years: either create a monophonic impulse response, or make sure the stereo reverb doesn’t create any clearly perceived positioning, outside the center, for the sounds you pass through it. Game sounds come from various directions, as games pan sources in real time. You don’t want your impulse response to constantly pull the image of your output to the right or left. Balance your wet sound to give enough character of the space, and your dry sound to let players understand where the sound is coming from.
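A practical way to follow the first option (a monophonic IR) is to downmix a stereo IR by averaging its channels, and a quick RMS comparison can flag an IR that would pull the image to one side. The sketch below works on raw sample lists and assumes the stereo IR is already loaded as two equal-length float lists (the function names are mine, not from any tool):

```python
def downmix_ir(left, right):
    """Average a stereo IR into a mono one, so the reverb tail
    adds no left/right bias of its own to panned game sounds."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

def channel_balance(left, right):
    """RMS-based balance check: ~0.5 means the IR is centered;
    values near 0.0 or 1.0 mean the tail leans hard to one side."""
    def rms(ch):
        return (sum(s * s for s in ch) / len(ch)) ** 0.5
    lr, rr = rms(left), rms(right)
    return lr / (lr + rr) if (lr + rr) else 0.5
```

If `channel_balance` sits far from 0.5, either downmix the IR or re-capture with better-matched microphone placement before using it in-engine.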

I hope I helped, stay safe!




Amazing and detailed response! Thank you very much - this was extremely helpful and did the trick!

I’m very happy to be of help, have fun (and success) with your project!

In my team, we believe that tasteful acoustics are mandatory for boosting immersion. Apart from that, IRs are one of my personal hobbies :slight_smile:


That’s an interesting point, @Panagiotis_Kouvelis. I had the opposite reaction when I found out the standard FMOD reverb plugin converts the input signal to mono (and thus doesn’t change the output with regard to the input panning). That means the more I send the signal to the reverb, the more I lose the perception of space. The same footsteps will be less localized in a reverberated environment than in a dry one. I know it’s normal for a reverb to slightly lose the sense of panning (because of the sound bouncing in every direction), but here the loss is greater than it should be. That’s why I immediately chose to use the convolution reverb instead, with a stereo IR, which keeps a sense of panning in the reverb.

However, I recently played in my DAW with some standard stereo IRs in Altiverb, with 100% of the panned signal routed to the reverb (which can be useful for some very dry libraries), and found that changing the input panning had little to no impact on the output. This is quite disturbing. It could indicate either that Altiverb doesn’t handle spatialization realistically, or that the panning essentially comes from the dry signal (as you said). On the other hand, a really interesting free VST plugin, Panagement 2 (specialized in spatialization), keeps a strong panning even at 100% wet in its reverb.

I’d like to have your thoughts on that (even if it goes beyond the strict FMOD subject).

In general you want the psychoacoustics of your system to serve your cause. For example, in games you don’t want to weight the sound purely according to the room you are simulating, because misleading the listener is worse than blurring the stereo field a bit more. If you mix classical music or acoustic jazz, you might want to give an image as clear as possible. In games, it would be great to calculate the early reflections for each emitter and the late reflections for the space around the listener. There have been many recent advances in game reverberation, and some cool features have been present in game technologies for more than 15 years now, all working towards more realistic listener envelopment and feedback. I always choose my simulation trade-offs according to the project’s scope.


Correct me if I’m wrong, but it seems you’re saying that in cases where the emitter and the listener are in different acoustic spaces, applying a spatialized reverb on the emitter, based solely on the emitter’s space, could lead to inaccurate spatial perception for the listener?
Our actual project is much simpler: it’s a 2D tactical game, and every actor (including the listener) is in the same space at the same time. In that case, I just didn’t want to narrow the stereo field when we’re in a dungeon level (due to the reverb applied on the diegetic SFX bus).

I don’t mean that. I’m just highlighting that reverberation in nature makes localization more difficult. In games, where we always have to trade accuracy and more realistic simulation for real-time performance, the effect can disturb the brain’s ability to localize even more easily. That is why many games help by applying multiple literacies within one medium (multimodality): for example, an arrow pointing to the enemy after the first hit on the player, or other extra-diegetic material.

To clarify what I stated before, a reverb’s duty is, in a sense, to mess things up regarding the localization of a source; that’s its nature, as reverb is extra, random reflections. But to keep things under control, we mix at the “right” level. If your game needs both super-accurate localization and lush reverbs, you can organize a small focus group of test players and create a basic level to experiment with the best settings. Just put your players under three different reverb wet levels and see in which one they achieve the score they should achieve per the game design specifications.
