wave9b
November 3, 2024, 9:40pm
1
So, this project I’m developing simulates a mixing table. It lets the player control the volume, pan, high EQ, and low EQ of each channel in a song (8 channels total).
I need a function so that when the user presses “R”, for example, the music from the EventReference variable (the only audio playing in the scene comes from this EventReference) is written to a .wav file using WAVWRITER_NRT, just like bouncing audio from a DAW.
I tried looking at how to use FMOD_OUTPUTTYPE_WAVWRITER_NRT, but I couldn’t implement it properly in my project. I later asked ChatGPT to help me implement it, but in the ChatGPT version the audio is always empty, Unity crashes, or, when it does have audio, it writes a huge file, like a 3-hour file.
Can anyone help me? I’m also new to FMOD.
Thanks in advance.
wave9b
November 4, 2024, 1:59am
2
I found a solution. After debugging a bit and asking my dev friend for help, we used an IEnumerator; I’ll post the script later.
wave9b
November 4, 2024, 1:47pm
3
Setting the output file path is not working, but bouncing the audio from FMOD is:
private Coroutine renderCoroutine;

void Update()
{
    if (Input.GetKeyDown(KeyCode.R))
    {
        RenderOffline();
    }
    // Stop the stored coroutine handle once rendering is done.
    // (Passing a fresh IEnumerator to StopCoroutine would do nothing.)
    if (!isRendering && renderCoroutine != null)
    {
        StopCoroutine(renderCoroutine);
        renderCoroutine = null;
    }
}

public void RenderOffline()
{
    isRendering = true;
    renderCoroutine = StartCoroutine(RenderAudioInBackground());
    UnityEngine.Debug.Log("rendering NRT started.");
}

private IEnumerator RenderAudioInBackground()
{
    print("RenderAudioInBackground IEnumerator");
    musicInstance.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
    FMOD.Studio.System studioSystem = RuntimeManager.StudioSystem;
    studioSystem.flushCommands();

    FMOD.System coreSystem = RuntimeManager.CoreSystem;
    coreSystem.getOutput(out FMOD.OUTPUTTYPE output); // remember the current output so it can be restored
    coreSystem.setOutput(FMOD.OUTPUTTYPE.WAVWRITER_NRT);
    //coreSystem.setSoftwareFormat(44100, FMOD.SPEAKERMODE.STEREO, 0);

    // Note: neither of the next two lines actually sets the wavwriter output
    // file. The filename is normally passed as extradriverdata to System::init,
    // which the Unity integration has already called, so FMOD falls back to
    // its default file name in the working directory.
    string outputPath = Application.dataPath + "/RenderedAudio.wav";
    System.Environment.SetEnvironmentVariable("FMOD_OUTPUTFILE", outputPath);
    coreSystem.setPluginPath(outputPath);

    // Start the rendering process
    musicInstance.start();
    musicInstance.getDescription(out EventDescription eventDescription);
    eventDescription.getLength(out int musicDuration); // milliseconds
    musicDuration = musicDuration / 1000;              // seconds

    // Render until the DSP clock has advanced by the event's length
    coreSystem.getMasterChannelGroup(out FMOD.ChannelGroup cg);
    cg.getDSPClock(out ulong child, out _);
    coreSystem.getSoftwareFormat(out int sampleRate, out _, out _);
    ulong future = child + (ulong)sampleRate * (uint)musicDuration;

    while (child < future)
    {
        // Each update renders one mixer block to the .wav file
        // studioSystem.update();
        coreSystem.update();
        cg.getDSPClock(out child, out _);
        // Wait until the next frame
        yield return null;
    }

    coreSystem.setOutput(output); // restore the original output
    isRendering = false;
    musicInstance.stop(FMOD.Studio.STOP_MODE.IMMEDIATE);
    musicInstance.release();
    UnityEngine.Debug.Log("Audio rendering completed and saved to: " + outputPath);
}
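Since the path-setting lines above don’t actually take effect, the wavwriter output ends up in FMOD’s default file (documented as fmodoutput.wav in the application’s working directory). One possible workaround, a minimal sketch assuming that default name and location, is to just move the file to where you wanted it after the render finishes. `RenderedWavMover` is a hypothetical helper name, not part of FMOD:

```csharp
using System.IO;

// Hypothetical helper: moves FMOD's default wavwriter output
// ("fmodoutput.wav" in the current working directory -- an assumption
// based on FMOD's documented default) to the path you actually wanted.
public static class RenderedWavMover
{
    public static bool MoveRenderedWav(string destinationPath)
    {
        string defaultFile = Path.Combine(Directory.GetCurrentDirectory(), "fmodoutput.wav");
        if (!File.Exists(defaultFile))
            return false; // nothing was rendered, or the default name differs

        if (File.Exists(destinationPath))
            File.Delete(destinationPath); // overwrite a previous bounce

        File.Move(defaultFile, destinationPath);
        return true;
    }
}
```

You could call it right after `coreSystem.setOutput(output)` restores the normal output, e.g. `RenderedWavMover.MoveRenderedWav(Application.dataPath + "/RenderedAudio.wav");`.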
Thank you for sharing the solution.