Not able to store the wav file with specified name using FMOD.OUTPUTTYPE.WAVWRITER in unity

Hi,
I am using FMOD + Unity and trying to create a wav file (with a specified name) using the FMOD.OUTPUTTYPE.WAVWRITER output through a script, which is attached to a microphone. I always get the file name fmodoutput.wav.

I am using the following code:
string fname = "xyz.wav";
FMODUnity.RuntimeManager.CoreSystem.close();
FMODUnity.RuntimeManager.CoreSystem.init(128, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);
FMODUnity.RuntimeManager.CoreSystem.setOutput(FMOD.OUTPUTTYPE.WAVWRITER);
FMODUnity.RuntimeManager.CoreSystem.init(128, FMOD.INITFLAGS.NORMAL, Marshal.StringToHGlobalAnsi(fname));

If I do not use close(), the file is created as fmodoutput.wav and I get the error "ERR_INITIALIZED".

If I use close(), I get the following errors:
[FMOD] assert : assertion: 'connectionsRemaining == 0' failed
[FMOD] assert : assertion: 'isEmpty()' failed
[FMOD] assert : assertion: 'realInstance' failed
[FMOD] assert : assertion: 'realInstance' failed
[FMOD] assert : assertion: 'realInstance' failed

Could you please help me identify the issue?

Firstly, it’s not safe to close the System object while everything is running in Unity; there is no mechanism to reload everything properly.

Some platforms permit switching the output safely at runtime (i.e. they don’t return ERR_INITIALIZED). Which platform are you on?

However, switching the output to the wav writer at runtime is not compatible with setting an output filename via extradriverdata. I will log a bug for this issue.
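For reference, the sequence that does honour the filename is to select the wav writer output before the System is first initialised. A minimal sketch against the raw FMOD Core C# API (outside the Unity integration; the channel count and filename here are illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

// Create a standalone Core System (not the one the Unity integration owns).
FMOD.System system;
FMOD.Factory.System_Create(out system);

// Select the wav writer output BEFORE init; extradriverdata is only
// honoured at init time, not when switching the output later.
system.setOutput(FMOD.OUTPUTTYPE.WAVWRITER);

// The output filename is passed as an ANSI string via extradriverdata.
IntPtr namePtr = Marshal.StringToHGlobalAnsi("xyz.wav");
try
{
    system.init(128, FMOD.INITFLAGS.NORMAL, namePtr);
}
finally
{
    Marshal.FreeHGlobal(namePtr);
}
```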

Thanks Mathew,

If I do not use close(), I get ERR_INITIALIZED, as below:
Error in writing wav file : ERR_INITIALIZED.

I used the following code to capture the error:
var result = FMODUnity.RuntimeManager.CoreSystem.init(128, FMOD.INITFLAGS.NORMAL, Marshal.StringToHGlobalUni(sentence));
if (result != FMOD.RESULT.OK)
{
Debug.Log("Error in writing wav file : " + result);
}

I am using Unity 2019.1.0f2 with FMOD 2.00.09 on Windows 10.

I am using Unity + FMOD for a sound simulation. I need to capture the output of the source (an event emitter) and 2 microphones, so in total I need to create 3 wav files simultaneously. Currently, if I do not specify the file name, only one "fmodoutput.wav" is created in the project folder.

Could you please let me know whether there is an alternative way to capture all 3 files?

Thanks a lot

That is correct: you cannot call System::init while the System is running, and you cannot call System::close safely in Unity, so that approach will not work for you. You can, however, modify our script code where we initialize the FMOD System for the first time; setting it to wav writer at that point will work. You cannot switch between the speakers and a wave file with that approach, though.
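A hypothetical sketch of the kind of change described: inside the FMOD Unity integration's RuntimeManager.cs, just before the existing first call to coreSystem.init(...) (the exact method and variable names vary between integration versions), select the wav writer output and pass a filename. "outputFile" here is an assumed variable holding your target path:

```csharp
// Assumed to be placed where the integration initialises its Core System,
// before the first coreSystem.init(...) call.
result = coreSystem.setOutput(FMOD.OUTPUTTYPE.WAVWRITER);

// The filename must be marshalled as an ANSI string for extradriverdata.
System.IntPtr extraDriverData =
    System.Runtime.InteropServices.Marshal.StringToHGlobalAnsi(outputFile);
result = coreSystem.init(virtualChannels, FMOD.INITFLAGS.NORMAL, extraDriverData);
```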

Capturing individual streams of audio and saving them to separate files is a pretty advanced topic: you’ll need to create a capture DSP and attach it to the DSP graph at the desired spot. You’ll also need to write out a .wav header and the collected bytes.

Mathew,

I am able to get a DSP now using the following command:
thisSoundEventChannels.getDSP(0, out dsp);

I am not sure how to attach the DSP to the DSP graph, or how to write the .wav header.

Could you please provide some pointers to a sample for this?

Thanks

To create a DSP you’d need to use System::createDSP and pass in an FMOD_DSP_DESCRIPTION you’ve defined. The main callback you must implement is DSP_READCALLBACK, which will receive the audio signal. Once created, the DSP can be added with ChannelGroup::addDSP.

For writing a .wav file you’ll need to read up on the spec, for example: http://soundfile.sapp.org/doc/WaveFormat/
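Following the format notes linked above, a minimal sketch of writing a canonical 44-byte PCM .wav header followed by raw sample bytes might look like this (the method name and parameters are illustrative; note FMOD's mixer works in 32-bit floats, so samples would need converting to 16-bit PCM first under these assumptions):

```csharp
using System.IO;
using System.Text;

static void WriteWav(string path, byte[] pcmData, int sampleRate, short channels)
{
    const short bitsPerSample = 16; // assumes 16-bit PCM sample data
    using (var bw = new BinaryWriter(File.Create(path)))
    {
        bw.Write(Encoding.ASCII.GetBytes("RIFF"));
        bw.Write(36 + pcmData.Length);                        // RIFF chunk size
        bw.Write(Encoding.ASCII.GetBytes("WAVE"));
        bw.Write(Encoding.ASCII.GetBytes("fmt "));
        bw.Write(16);                                         // fmt subchunk size
        bw.Write((short)1);                                   // audio format: PCM
        bw.Write(channels);
        bw.Write(sampleRate);
        bw.Write(sampleRate * channels * bitsPerSample / 8);  // byte rate
        bw.Write((short)(channels * bitsPerSample / 8));      // block align
        bw.Write(bitsPerSample);
        bw.Write(Encoding.ASCII.GetBytes("data"));
        bw.Write(pcmData.Length);                             // data subchunk size
        bw.Write(pcmData);
    }
}
```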

As you can see, this is a pretty involved task that will require a fair amount of research. I’ll make a note of this for us to provide a Unity component in our integration that wraps this functionality up for you.

Mathew,

Thanks for your reply.

I have written the CreateDSP and callback methods as below, using the documentation and some examples.

static FMOD.RESULT CallbackMic(ref FMOD.DSP_STATE S, IntPtr In, IntPtr Out, uint Len, int ChInCount, ref int ChOutCount)
{
    FMOD.DSP dsp1;
    channelGroup.getDSP(0, out dsp1);
    dsp1.setMeteringEnabled(true, true);
    dsp1.setActive(true);
    dsp1.setBypass(false);


    FMOD.DSP Handle = dsp1;

    IntPtr Data;

    dsp1.getUserData(out Data); // <- Stack overflow

    return FMOD.RESULT.OK;
}


private FMOD.DSP CreateDSP(FMOD.System system)
{


    char[] nameArray = new char[32];
    FMOD.DSP thisdsp = new FMOD.DSP();
    FMOD.DSP_DESCRIPTION dspdesc = new FMOD.DSP_DESCRIPTION();
    String dspname = "sample dsp                      ";         
    FMOD.DSP_READCALLBACK dspreadcallback = new FMOD.DSP_READCALLBACK(CallbackMic);
    
     
    dspdesc.version = 0x00010000;
    dspdesc.numinputbuffers = 1;
    dspdesc.numoutputbuffers = 1;

    dspdesc.userdata = (IntPtr)GCHandle.Alloc(this);

     
    dspname.ToCharArray().CopyTo(nameArray, 0);
    dspdesc.name = nameArray;

    dspdesc.read = dspreadcallback;

    FMOD.RESULT result;
    result = system.createDSP(ref dspdesc, out thisdsp);
    result = thisdsp.setActive(true);
     
    return thisdsp;

}

But I have already created a DSP using createDSPByType, as below, via FMODUnity to get the volume, and it is working fine:
FMODUnity.RuntimeManager.CoreSystem.createDSPByType(FMOD.DSP_TYPE.FFT, out fft);
fft.setParameterInt((int)FMOD.DSP_FFT.WINDOWTYPE, (int)FMOD.DSP_FFT_WINDOW.HANNING);
fft.setParameterInt((int)FMOD.DSP_FFT.WINDOWSIZE, WindowSize * 2);

FMODUnity.RuntimeManager.CoreSystem.getMasterChannelGroup(out channelGroup);
channelGroup.addDSP(FMOD.CHANNELCONTROL_DSP_INDEX.HEAD, fft);

I made "channelGroup" static public, and it is already used in the CallbackMic method.

Could you please let me know whether this will work, and how to call these 2 methods? Right now it is not doing anything.

Are you expecting that "thisdsp" needs to be added to the same channel group?

I think, as a next step, I need to look at writing into .wav files.

Thanks

It’s always good practice to check the error codes returned from each function to ensure everything is working correctly.

If you are interested in getting the spectrum of the signal, using the FFT DSP the way you have should work; you’ll need to add code to query the spectrum, of course.
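Querying the spectrum from the FFT DSP might look like the sketch below (assuming "fft" is the DSP created with createDSPByType earlier in the thread; SPECTRUMDATA returns a pointer to an FMOD.DSP_PARAMETER_FFT structure):

```csharp
using System;
using System.Runtime.InteropServices;

// Fetch the FFT DSP's spectrum parameter as unmanaged data.
IntPtr unmanagedData;
uint length;
fft.getParameterData((int)FMOD.DSP_FFT.SPECTRUMDATA, out unmanagedData, out length);

// Marshal it into the wrapper's FFT parameter struct.
var fftData = (FMOD.DSP_PARAMETER_FFT)Marshal.PtrToStructure(
    unmanagedData, typeof(FMOD.DSP_PARAMETER_FFT));

if (fftData.numchannels > 0)
{
    // spectrum[channel][bin] holds magnitudes from 0 Hz up to Nyquist.
    float firstBinMagnitude = fftData.spectrum[0][0];
}
```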

If you want to get the bytes of the signal, you will need to add your "thisdsp" in a similar fashion to the FFT DSP. In your CallbackMic you should not make calls to getDSP or setActive etc.; this callback is for processing audio on the mixer thread.
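A sketch of a read callback that follows this advice: it only copies the signal through and records it, with no FMOD API calls on the mixer thread. "mSampleBuffer" and "mBufferLock" are assumed statics (allocating a temporary array per callback is not ideal in production; a preallocated ring buffer would be better):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

static readonly object mBufferLock = new object();
static List<float> mSampleBuffer = new List<float>();

[AOT.MonoPInvokeCallback(typeof(FMOD.DSP_READCALLBACK))]
static FMOD.RESULT CaptureDSPReadCallback(ref FMOD.DSP_STATE dspState,
    IntPtr inBuffer, IntPtr outBuffer, uint length, int inChannels, ref int outChannels)
{
    int sampleCount = (int)length * inChannels;

    // Copy the interleaved float samples out of the unmanaged input buffer.
    float[] samples = new float[sampleCount];
    Marshal.Copy(inBuffer, samples, 0, sampleCount);

    // Pass the signal through unchanged so playback is unaffected.
    Marshal.Copy(samples, 0, outBuffer, sampleCount);

    // Record the samples for later writing; no FMOD calls here.
    lock (mBufferLock)
    {
        mSampleBuffer.AddRange(samples);
    }
    return FMOD.RESULT.OK;
}
```

On the main thread (e.g. in Update), you could then take the lock, copy mSampleBuffer out, clear it, and append the samples to your own buffer or file.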

I cannot provide code level support here on the forums but I hope this advice gets you going in the right direction.

Mathew,

Thanks. I made the changes as you suggested; maybe I have made a mistake somewhere.
I do not see any error.
Do I need to write into the .wav file in the callback method?

Thanks

It’s best to save the samples to a separate buffer so you’re not writing to a file in the callback, but you can for testing purposes.

Hey melango,

Could you please share your code for getting the bytes of the signal here? I have also been confused about this problem for a long time. I’d appreciate your help!

Hi,

Sorry, I was on vacation and could not see your message.
I tried to reuse a method from the net, as below, for the callback.
I initialised the callback as below:
dialogueCallback = new FMOD.Studio.EVENT_CALLBACK(DialogueEventCallback);
and registered it using:
dialogueInstance.setCallback(dialogueCallback);

Now I am able to hear the sound that is added (played) on top of the original.
I need to know which switch case is playing, and how to write it into a file. Could Mathew please help?

static FMOD.RESULT DialogueEventCallback(FMOD.Studio.EVENT_CALLBACK_TYPE type, FMOD.Studio.EventInstance instance, IntPtr parameterPtr)       
{
    // Retrieve the user data
    IntPtr stringPtr;
    instance.getUserData(out stringPtr);

    // Get the string object
    GCHandle stringHandle = GCHandle.FromIntPtr(stringPtr);
    String key = stringHandle.Target as String;
    

    Debug.Log("type test : " + type + key);
    
    switch (type)
    {
        case FMOD.Studio.EVENT_CALLBACK_TYPE.CREATE_PROGRAMMER_SOUND:
            {
                Debug.Log("type test01 : " + type);
                FMOD.MODE soundMode = FMOD.MODE.LOOP_NORMAL | FMOD.MODE.CREATECOMPRESSEDSAMPLE | FMOD.MODE.NONBLOCKING;
                var parameter = (FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES)Marshal.PtrToStructure(parameterPtr, typeof(FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES));

                if (key.Contains("."))
                {
                    FMOD.Sound dialogueSound;
                    //var soundResult = FMODUnity.RuntimeManager.CoreSystem.createSound(Application.streamingAssetsPath + "/" + key, soundMode, out dialogueSound);
                    var soundResult = FMODUnity.RuntimeManager.CoreSystem.createSound("c:\\temp\\elan12.wav", soundMode, out dialogueSound);
                    if (soundResult == FMOD.RESULT.OK)
                    {
                        parameter.sound = dialogueSound.handle;
                        parameter.subsoundIndex = -1;
                        Marshal.StructureToPtr(parameter, parameterPtr, false);
                        Debug.Log("type test 11 : " + type);
                    }
                }
                else
                {
                    Debug.Log("type test 12 : " + type);
                    FMOD.Studio.SOUND_INFO dialogueSoundInfo;
                    var keyResult = FMODUnity.RuntimeManager.StudioSystem.getSoundInfo(key, out dialogueSoundInfo);
                    if (keyResult != FMOD.RESULT.OK)
                    {
                        break;
                    }
                    FMOD.Sound dialogueSound;
                    //var soundResult = FMODUnity.RuntimeManager.CoreSystem.createSound(dialogueSoundInfo.name_or_data, soundMode | dialogueSoundInfo.mode, ref dialogueSoundInfo.exinfo, out dialogueSound);
                    var soundResult = FMODUnity.RuntimeManager.CoreSystem.createSound("c:\\temp\\elan13.wav", soundMode | dialogueSoundInfo.mode, ref dialogueSoundInfo.exinfo, out dialogueSound);
                    if (soundResult == FMOD.RESULT.OK)
                    {
                        parameter.sound = dialogueSound.handle;
                        parameter.subsoundIndex = dialogueSoundInfo.subsoundindex;
                        Marshal.StructureToPtr(parameter, parameterPtr, false);
                    }
                }
                
            }
            break;
        case FMOD.Studio.EVENT_CALLBACK_TYPE.DESTROY_PROGRAMMER_SOUND:
            {
                Debug.Log("type test2 : " + type);
                var parameter = (FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES)Marshal.PtrToStructure(parameterPtr, typeof(FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES));
                var sound = new FMOD.Sound();
                sound.handle = parameter.sound;
                sound.release();
            }
            break;
        case FMOD.Studio.EVENT_CALLBACK_TYPE.DESTROYED:
                // Now the event has been destroyed, unpin the string memory so it can be garbage collected
                stringHandle.Free();
                Debug.Log("type test3 : " + type);
                break;
            
    } 


    return FMOD.RESULT.OK;
}

For capturing audio I’ve posted some code in another thread you might find useful:

As for how to write a .wav file, that is beyond the scope of support we can provide here; consider looking to Stack Overflow for advice.

Mathew,

Thanks for your support. Now I am able to create the .wav files and play them in another tool.

My idea is to create 3 wav files, so I created 3 scripts and attached them to various objects:
The 1st script (creating S1.wav) is attached to a game object which has the event emitter, the source of the sound (speaker).
The 2nd script (creating M1.wav) is attached to another game object (mic1), which is 2 m away from the source.
The 3rd script (creating M2.wav) is attached to another game object (mic2), which is 4 m away from the source.

Now I am trying to import all 3 .wav files into Audacity to find the time at which the sound signal is received, but I am seeing that all the signals start at the same time.
I was expecting a delay of (distance / speed of sound), so that the start of the sound signal would vary between S1.wav, M1.wav, and M2.wav.

Could you please let me know where I made a mistake?

I used the following code

    FMODUnity.RuntimeManager.CoreSystem.getMasterChannelGroup(out masterCG);
    FMODUnity.RuntimeManager.CoreSystem.createDSP(ref desc, out mCaptureDSP);
    masterCG.addDSP(0, mCaptureDSP);

Thanks

3 posts were split to a new topic: How to delay sounds based on distance