Can't record with Unity Recorder

I want to record game footage and audio with the Unity Recorder, but now that I’m using FMOD, I can’t anymore. How can I record my game in Unity? The Unity Recorder worked well because it slowed the timestep to preserve 60 fps, producing a great, smooth video.

I will add that recording with OBS (a) does not do this, producing inconsistent framerates that make the video far worse, and (b) doesn’t have usable audio, as we’re developing on macOS and OBS can’t access system audio.

If this isn’t solvable, my team will have to dump FMOD. The ability to share footage of the game is critical to talking about it and promoting it.


Unfortunately the Unity Recorder doesn’t hook into FMOD audio, and that’s not really something we have control over. You should be able to route system audio from Unity into OBS using Loopback, but if OBS isn’t suiting your needs in terms of video quality, there’s not much else I can suggest.

That this isn’t working is such a shame :expressionless: Any updates in 2024?


It is possible to pass audio data to Unity through an OnAudioFilterRead callback.
You can get audio out of FMOD in various ways; I think the simplest would be to implement a custom DSP with an FMOD_DSP_READ_CALLBACK that copies all of FMOD’s audio data into an intermediate buffer. You can then copy that buffer in the OnAudioFilterRead callback, getting FMOD’s output into Unity and thus through the Unity Recorder.

Here is a basic example:

ScriptUsageTap.cs
//--------------------------------------------------------------------
// Basic FMOD to Unity Audio example.
// This script copies audio data from FMOD's master channel group to
// Unity's audio buffer using OnAudioFilterRead.
// Prerequisites:
// 1. Unity audio is enabled.
// 2. The GameObject also contains a Unity AudioSource.
// 3. There is a Unity AudioListener on a different GameObject.
// 4. Your master bus is built with a Stereo channel layout.
// Use of this script will incur one mix block of latency.
//--------------------------------------------------------------------

using UnityEngine;
using System.Runtime.InteropServices;
using System.Collections.Generic;
using System;

[RequireComponent(typeof(AudioSource))]
public class ScriptUsageTap : MonoBehaviour
{
    private FMOD.DSP_DESCRIPTION mDSPDesc;
    private FMOD.DSP mDSP;

    private List<float> mBuffer = new List<float>();
    private bool mRunning = false;

    private void CHECK_RESULT(FMOD.RESULT result, [System.Runtime.CompilerServices.CallerLineNumber] int sourceLineNumber = 0)
    {
        if (result != FMOD.RESULT.OK)
        {
            Debug.LogError(String.Format("Call to FMOD API failed with result: \"{0}\" line: {1}", FMOD.Error.String(result), sourceLineNumber));
        }
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        if (!mRunning)
            return;

        // Note: mBuffer is also written from FMOD's mixer thread; in
        // production code you should guard access to it with a lock.
        if (mBuffer.Count >= data.Length)
        {
            // Copy from the intermediate buffer into Unity's audio buffer
            mBuffer.CopyTo(0, data, 0, data.Length);
            mBuffer.RemoveRange(0, data.Length);
        }
    }

    private void Start()
    {
        mDSPDesc = new FMOD.DSP_DESCRIPTION();
        mDSPDesc.numinputbuffers = 1;
        mDSPDesc.numoutputbuffers = 1;
        mDSPDesc.read = (ref FMOD.DSP_STATE dsp_state, IntPtr inbuffer, IntPtr outbuffer, uint length, int inchannels, ref int outchannels) =>
        {
            float[] tmpBuffer = new float[length * inchannels];

            if (length > 0)
            {
                // Copy to intermediate buffer
                Marshal.Copy(inbuffer, tmpBuffer, 0, tmpBuffer.Length);
                mBuffer.AddRange(tmpBuffer);

                // Silence FMOD output
                Array.Clear(tmpBuffer, 0, tmpBuffer.Length);
                Marshal.Copy(tmpBuffer, 0, outbuffer, tmpBuffer.Length);
            }

            return FMOD.RESULT.OK;
        };

        CHECK_RESULT(FMODUnity.RuntimeManager.CoreSystem.createDSP(ref mDSPDesc, out mDSP));
    }

    private void OnDestroy()
    {
        CHECK_RESULT(FMODUnity.RuntimeManager.CoreSystem.getMasterChannelGroup(out FMOD.ChannelGroup group));
        CHECK_RESULT(group.removeDSP(mDSP));
        CHECK_RESULT(mDSP.release());
        mRunning = false;
    }

    private void Update()
    {
        FMODUnity.RuntimeManager.CoreSystem.getMasterChannelGroup(out FMOD.ChannelGroup group);
        if (group.hasHandle() && !mRunning)
        {
            mRunning = true;
            CHECK_RESULT(group.addDSP(0, mDSP));
        }
    }
}

If you want to support more channel layouts, you will have to implement some custom downmixing behavior yourself.
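As an illustration, here is a minimal sketch of what a 5.1-to-stereo fold-down could look like on the interleaved samples the read callback receives. The channel order (FL, FR, C, LFE, SL, SR) and the -3 dB coefficients are assumptions based on common ITU-style downmix conventions; verify them against your actual bus layout before relying on this.

```csharp
// Hedged sketch: downmix interleaved 5.1 samples (FL, FR, C, LFE, SL, SR)
// to interleaved stereo. The channel order and coefficients are assumptions;
// check them against your actual FMOD bus layout.
public static class Downmix
{
    const float Coef = 0.7071f; // -3 dB for centre and surround channels

    public static float[] FivePointOneToStereo(float[] input)
    {
        int frames = input.Length / 6;
        var output = new float[frames * 2];
        for (int i = 0; i < frames; i++)
        {
            float fl = input[i * 6 + 0], fr = input[i * 6 + 1];
            float c  = input[i * 6 + 2];                 // centre
            // input[i * 6 + 3] is the LFE; commonly dropped in a stereo fold-down
            float sl = input[i * 6 + 4], sr = input[i * 6 + 5];
            output[i * 2 + 0] = fl + Coef * c + Coef * sl;
            output[i * 2 + 1] = fr + Coef * c + Coef * sr;
        }
        return output;
    }
}
```

You would run something like this over the intermediate buffer before handing the samples to OnAudioFilterRead when FMOD's bus is 5.1 but Unity is running in stereo.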

Maybe you can use one of the external recording tools suggested in Unity’s documentation.

The Snipping Tool for Windows also records system audio, and depending on your Windows audio configuration you may be able to record surround sound and not just stereo.
For surround sound, I don’t think you’d need an audio interface with 6 or more outputs as long as you can configure it from the Windows audio configuration panel.
You could also use VB-Audio VoiceMeeter.
This software acts as a virtual mixer with enough outputs for you to monitor your audio in stereo or surround. You can set up a loop for your audio from the FMOD output into VoiceMeeter and back into the input of the Snipping Tool.

Here is a script that I use for my project. I was stumbling into the same issue and had to develop a custom solution to be able to use the Unity Recorder to record Unity audio.

Attach CameraRecorder to the camera you want to record from. Note that it must NOT be your gameplay camera; create a new one and place it where you want.

The system requires the Unity Recorder to be installed. It also requires FFmpeg to be installed, and you will need to set the path to it in the Inspector. FFmpeg is used to combine the recorded results.

The component records video and FMOD audio separately, and then combines them into one file. You can choose whether to delete or keep the original files in case you want to compose them manually.

Files are stored in the project’s Recordings folder.

Note: Since the Unity Recorder is an Editor-only tool, this approach won’t work in a build. You will also need to enable Unity audio for proper use of the Unity Recorder (it is typically disabled with FMOD).

After recording is done, the script waits for the number of seconds set in “waitBeforeTryToMerge” before performing the merge. This is because the Unity Recorder provides no callback once the file has been saved to disk.

CameraRecorder class

using System;
using UnityEngine;
using UnityEngine.InputSystem;
using System.Collections;
using System.Diagnostics;
using System.IO;
using UnityEditor.Recorder;
using UnityEditor.Recorder.Encoder;
using UnityEditor.Recorder.Input;
using Debug = UnityEngine.Debug;

namespace Recorder
{
    [RequireComponent(typeof(Camera))] // Ensure a Camera component is attached
    public class CameraRecorder : MonoBehaviour
    {
        [SerializeField] private int width = 1920;
        [SerializeField] private int height = 1080;
        [SerializeField] private float frameRate = 60; // Set target FPS to 60
        [SerializeField] private string outputFileName = "RecordedVideo";
        [SerializeField] private InputActionReference recordingAction;
        [SerializeField] private bool recordAudio = true;
        [SerializeField] private bool deleteSourceFiles = false;
        [SerializeField] private float waitBeforeTryToMerge = 2.0f;
        [SerializeField] private string pathToFFMPEG = "/opt/homebrew/bin/ffmpeg";

        private Camera cameraComponent;
        private RenderTexture temporaryRenderTexture;
        private RecorderController recorderController;
        private AudioRecorder audioRecorder;
        private bool isRecording = false;
        private string outputDirectory;
        private string videoFilePath;
        private string audioFilePath;

        private void Start()
        {
            cameraComponent = GetComponent<Camera>();

            if (recordAudio)
            {
                audioRecorder = new AudioRecorder();
            }

            recordingAction.action.Enable();
            recordingAction.action.performed += context =>
            {
                if (isRecording)
                {
                    StopRecording();
                }
                else
                {
                    StartRecording();
                }
            };
        }

        private void SetupRecorder()
        {
            var fullName = Directory.GetParent(Application.dataPath)?.FullName;
            if (fullName != null)
                outputDirectory = Path.Combine(fullName, "Recordings");
            Directory.CreateDirectory(outputDirectory);

            var dateTimeSuffix = DateTime.Now.ToString("MM.dd.HH.mm.ss");
            var outputFileCombined = $"{outputFileName}_{dateTimeSuffix}";
            
            videoFilePath = Path.Combine(outputDirectory, outputFileCombined);
            audioFilePath = Path.ChangeExtension(videoFilePath, ".wav");

            // Initialize the temporary RenderTexture
            temporaryRenderTexture = new RenderTexture(width, height, 24);
            temporaryRenderTexture.format = RenderTextureFormat.ARGB32;
            cameraComponent.targetTexture = temporaryRenderTexture;

            var renderTextureInput = new RenderTextureInputSettings
            {
                RenderTexture = temporaryRenderTexture
            };

            var videoRecorderSettings = ScriptableObject.CreateInstance<MovieRecorderSettings>();
            videoRecorderSettings.name = "CustomVideoRecorder";
            videoRecorderSettings.Enabled = true;
            videoRecorderSettings.EncoderSettings = new CoreEncoderSettings();
            videoRecorderSettings.ImageInputSettings = renderTextureInput;
            videoRecorderSettings.OutputFile = videoFilePath;
            videoRecorderSettings.FrameRatePlayback = FrameRatePlayback.Constant;
            videoRecorderSettings.FrameRate = frameRate;
            videoRecorderSettings.CapFrameRate = true;

            var recorderControllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
            recorderControllerSettings.AddRecorderSettings(videoRecorderSettings);
            recorderControllerSettings.SetRecordModeToManual();
            recorderControllerSettings.FrameRate = frameRate;
            recorderControllerSettings.CapFrameRate = true;

            recorderController = new RecorderController(recorderControllerSettings);
        }

        private void StartRecording()
        {
            SetupRecorder();
            
            isRecording = true;

            // Ensure the temporary RenderTexture is available
            if (temporaryRenderTexture == null)
            {
                temporaryRenderTexture = new RenderTexture(width, height, 24);
                cameraComponent.targetTexture = temporaryRenderTexture;
            }

            recorderController.PrepareRecording();
            recorderController.StartRecording();

            if (recordAudio)
            {
                audioRecorder.StartRecording();
            }
        }

        private void StopRecording()
        {
            isRecording = false;
            if (recordAudio)
            {
                audioRecorder.StopRecording();
            }
            recorderController.StopRecording();

            StartCoroutine(SaveAudio());

            // Clean up the temporary RenderTexture
            if (temporaryRenderTexture != null)
            {
                cameraComponent.targetTexture = null;
                Destroy(temporaryRenderTexture);
                temporaryRenderTexture = null;
            }

            if (!recordAudio) return;
            StartCoroutine(WaitForRecordingToFinish());
        }
        
        private IEnumerator WaitForRecordingToFinish()
        {
            yield return new WaitForSeconds(waitBeforeTryToMerge);

            // The Recorder appends the extension to the path internally,
            // so we only add it here, after recording is done.
            videoFilePath += ".mp4";
            
            if (File.Exists(videoFilePath))
            {
                Debug.Log("Recording finished. Starting to combine video and audio.");
                CombineVideoAndAudio();
            }
            else
            {
                Debug.LogError("Recorded video file was not found after it was recorded.");
            }
        }

        private IEnumerator SaveAudio()
        {
            if (recordAudio)
            {
                audioRecorder.SaveAudioToWav(audioFilePath);
                yield return null;

                Debug.Log("Audio saved to " + audioFilePath);
            }

            Debug.Log("Recording completed.");
        }

        private void CombineVideoAndAudio()
        {
            var combinedFilePath = Path.Combine(Path.GetDirectoryName(videoFilePath),
                Path.GetFileNameWithoutExtension(videoFilePath) + "_Combined" + Path.GetExtension(videoFilePath)
            );
            
            // Build the FFmpeg command
            var ffmpegCommand = $"-i \"{videoFilePath}\" -i \"{audioFilePath}\" -c:v copy -c:a aac -strict experimental \"{combinedFilePath}\"";

            var process = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    FileName = pathToFFMPEG,  // Path to the ffmpeg binary, set in the Inspector
                    Arguments = ffmpegCommand,
                    RedirectStandardOutput = true,
                    RedirectStandardError = true,
                    UseShellExecute = false,
                    CreateNoWindow = true
                }
            };

            try
            {
                process.Start();

                string output = process.StandardOutput.ReadToEnd();
                string error = process.StandardError.ReadToEnd();

                process.WaitForExit();

                if (process.ExitCode == 0)
                {
                    Debug.Log("Combined video and audio saved to " + combinedFilePath);
                    if (deleteSourceFiles)
                    {
                        DeleteSourceFiles();
                    }
                }
                else
                {
                    Debug.LogError("FFmpeg failed with exit code " + process.ExitCode);
                    Debug.LogError("FFmpeg Output: " + output);
                    Debug.LogError("FFmpeg Errors: " + error);
                }
            }
            catch (Exception ex)
            {
                Debug.LogError("An error occurred while combining video and audio: " + ex.Message);
            }
        }

        private void DeleteSourceFiles()
        {
            try
            {
                if (File.Exists(videoFilePath))
                {
                    File.Delete(videoFilePath);
                    Debug.Log("Deleted video file: " + videoFilePath);
                }

                if (File.Exists(audioFilePath))
                {
                    File.Delete(audioFilePath);
                    Debug.Log("Deleted audio file: " + audioFilePath);
                }
            }
            catch (Exception ex)
            {
                Debug.LogError("An error occurred while deleting source files: " + ex.Message);
            }
        }
    }
}

AudioRecorder class

using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;
using FMODUnity;

namespace Recorder
{
    public class AudioRecorder
    {
        private FMOD.DSP mDSP;
        private FMOD.ChannelGroup mCg;
        private readonly List<float> mAudioData;
        private readonly int mSampleRate;
        private int mNumChannels;
        private FMOD.DSP_DESCRIPTION mDSPDescription;

        public AudioRecorder()
        {
            mAudioData = new List<float>();
            RuntimeManager.CoreSystem.getSoftwareFormat(out mSampleRate, out _, out _);

            var mObjHandle = GCHandle.Alloc(this); // Normal handle; a Pinned handle would throw for a non-blittable type
            mDSPDescription = new FMOD.DSP_DESCRIPTION
            {
                numinputbuffers = 1,
                numoutputbuffers = 1,
                read = CaptureDSPReadCallback,
                userdata = GCHandle.ToIntPtr(mObjHandle)
            };
        }

        public void StartRecording()
        {
            mAudioData.Clear();

            var bus = RuntimeManager.GetBus("bus:/");
            if (bus.getChannelGroup(out mCg) != FMOD.RESULT.OK) return;
            RuntimeManager.CoreSystem.createDSP(ref mDSPDescription, out mDSP);
            mCg.addDSP(0, mDSP);
        }

        public void StopRecording()
        {
            if (!mDSP.hasHandle()) return;
            mCg.removeDSP(mDSP);
            mDSP.release();
        }

        public void SaveAudioToWav(string filePath)
        {
            using var fs = File.Create(filePath);
            using var bw = new BinaryWriter(fs);
            WriteWavHeader(bw, mAudioData.Count);
            var bytes = new byte[mAudioData.Count * 4];
            Buffer.BlockCopy(mAudioData.ToArray(), 0, bytes, 0, bytes.Length);
            fs.Write(bytes, 0, bytes.Length);
        }

        [AOT.MonoPInvokeCallback(typeof(FMOD.DSP_READ_CALLBACK))]
        private static FMOD.RESULT CaptureDSPReadCallback(ref FMOD.DSP_STATE dspState, IntPtr inBuffer, IntPtr outBuffer, uint length, int inChannels, ref int outChannels)
        {
            var lengthElements = (int)length * inChannels;
            var data = new float[lengthElements];
            Marshal.Copy(inBuffer, data, 0, lengthElements);

            var functions = (FMOD.DSP_STATE_FUNCTIONS)Marshal.PtrToStructure(dspState.functions, typeof(FMOD.DSP_STATE_FUNCTIONS));
            functions.getuserdata(ref dspState, out var userData);

            if (userData != IntPtr.Zero)
            {
                var objHandle = GCHandle.FromIntPtr(userData);

                if (objHandle.Target is AudioRecorder { mAudioData: { } } obj)
                {
                    obj.mNumChannels = inChannels;
                    obj.mAudioData.AddRange(data);
                }
            }

            Marshal.Copy(data, 0, outBuffer, lengthElements);

            return FMOD.RESULT.OK;
        }

        private void WriteWavHeader(BinaryWriter bw, int length)
        {
            bw.Seek(0, SeekOrigin.Begin);

            bw.Write(System.Text.Encoding.ASCII.GetBytes("RIFF"));
            bw.Write(36 + length * 4); // RIFF chunk size: 36 header bytes + data size
            bw.Write(System.Text.Encoding.ASCII.GetBytes("WAVEfmt "));
            bw.Write(16);
            bw.Write((short)3);
            bw.Write((short)mNumChannels);
            bw.Write(mSampleRate);
            bw.Write(mSampleRate * 32 / 8 * mNumChannels);
            bw.Write((short)(32 / 8 * mNumChannels));
            bw.Write((short)32);
            bw.Write(System.Text.Encoding.ASCII.GetBytes("data"));
            bw.Write(length * 4);
        }
    }
}


On paper this solution seems the most feasible way to keep using the Unity Recorder package while having FMOD audio, but I tried this script (Unity 2022.3), and when it’s active in play mode I just hear some high-pitched clicks and nothing more. Destroying the object reverts the audio to normal.

I think I did everything specified in the comments (enabling Unity audio, an AudioSource on this script’s GameObject, an AudioListener on a separate GameObject).

The Log information:

  • Call to FMOD API with result OK
  • [FMOD] OutputWASAPI::mixerThread : Starvation detected in WASAPI output buffer!

I’ve checked the sample rates (Unity & FMOD) and they match 48000.
I’ve also tried increasing the DSP buffer size with

        FMODUnity.RuntimeManager.CoreSystem.setDSPBufferSize(1024, 4); 

but still nothing seems to work.

I can’t seem to figure out the problem. Could you provide some guidance?

It’s likely that there’s a mismatch between the number of channels being used by FMOD at the point the capture DSP is inserted into the DSP graph and the number used by the Audio Filter in Unity’s audio system. There are a number of ways to solve this, depending on whether the speaker modes conform to their respective system speaker modes (i.e. Unity’s speaker mode in Audio preferences, FMOD’s speaker mode for the target platform) or are instead set to specific speaker modes regardless; fundamentally, they need to match, or you’ll end up with artifacting like the high-pitched clicking you’re experiencing.

Since the empty AudioSource will match Unity’s default speaker mode at Edit → Project Settings → Audio → Default Speaker Mode, you can do one of the following:

  • Change the Unity setting to match the number of channels you’re expecting from FMOD
  • Have the FMOD DSP downmix to match the Unity setting

For the latter, you should be able to grab Unity’s speaker mode using AudioSettings.GetConfiguration().speakerMode, and then change the FMOD DSP’s speaker mode by using DSP::setChannelFormat and setting source_speakermode to match it.
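A hedged sketch of the latter approach might look like the following. The mapping covers only the common Unity speaker modes, `mDSP` is assumed to be the capture DSP created in the earlier example, and the format should be applied before the DSP is added to the channel group.

```csharp
// Hedged sketch: force the capture DSP's input format to match Unity's
// speaker mode so the channel counts agree on both sides.
using UnityEngine;

public static class SpeakerModeMatcher
{
    // Map Unity's AudioSpeakerMode onto the closest FMOD.SPEAKERMODE.
    // Anything unrecognised falls back to stereo.
    static FMOD.SPEAKERMODE ToFmodSpeakerMode(AudioSpeakerMode unityMode)
    {
        switch (unityMode)
        {
            case AudioSpeakerMode.Mono:        return FMOD.SPEAKERMODE.MONO;
            case AudioSpeakerMode.Quad:        return FMOD.SPEAKERMODE.QUAD;
            case AudioSpeakerMode.Surround:    return FMOD.SPEAKERMODE.SURROUND;
            case AudioSpeakerMode.Mode5point1: return FMOD.SPEAKERMODE._5POINT1;
            case AudioSpeakerMode.Mode7point1: return FMOD.SPEAKERMODE._7POINT1;
            default:                           return FMOD.SPEAKERMODE.STEREO;
        }
    }

    public static void MatchUnitySpeakerMode(FMOD.DSP dsp)
    {
        var unityMode = AudioSettings.GetConfiguration().speakerMode;
        // Channel mask 0 and numchannels 0 let the speaker mode determine
        // the channel count; apply this before adding the DSP to the group.
        dsp.setChannelFormat((FMOD.CHANNELMASK)0, 0, ToFmodSpeakerMode(unityMode));
    }
}
```

You would call `SpeakerModeMatcher.MatchUnitySpeakerMode(mDSP)` after creating the DSP, e.g. at the end of the example's `Start()`.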