Hi Connor!
Thanks for getting back to me again. I’ve begun registering a new project and am currently awaiting approval. I’ve also made a stripped-down version of the project, so I’ll send it your way once I’ve been cleared.
I tried the mixerSuspend followed by the flushCommands step. The program gets through OnApplicationPause without issue, but breaks a frame later (just before the Oculus device goes idle). If I add the recordStop step anywhere during ProcessDeviceState, things break as soon as that method is hit; if I don’t call it, things break a frame later instead. Either way I end up with this null pointer dereference error:
2024/04/24 15:12:08.912 27277 27297 Error CRASH pid: 27277, tid: 27297, name: UnityMain  >>> com.ME.singing <<<
2024/04/24 15:12:08.912 27277 27297 Error CRASH uid: 10116
2024/04/24 15:12:08.912 27277 27297 Error CRASH signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr --------
2024/04/24 15:12:08.912 27277 27297 Error CRASH Cause: null pointer dereference
(The log above is what I get when I call recordStop during OnApplicationPause.)
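For reference, this is the pause/resume ordering I understood from the FMOD mobile guidance and have been trying to follow. This is only a sketch of the intent, not the exact code in my class below (notably, I don’t currently call mixerResume on the way back in):

```csharp
// Sketch of the suspend/resume ordering I'm aiming for, based on my reading
// of the FMOD mobile guidance -- not the exact code in my class below.
void OnApplicationPause(bool pauseStatus)
{
    if (pauseStatus)
    {
        // Going to the background: stop feeding the device, then suspend the mixer.
        FMODUnity.RuntimeManager.CoreSystem.recordStop(RecordingDeviceIndex);
        FMODUnity.RuntimeManager.StudioSystem.flushCommands();
        FMODUnity.RuntimeManager.CoreSystem.mixerSuspend();
    }
    else
    {
        // Coming back to the foreground: resume the mixer before touching anything else.
        FMODUnity.RuntimeManager.CoreSystem.mixerResume();
    }
}
```

If my ordering above is off, that might explain some of what follows.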
This is the DSP playback code (i.e. the code that routes the player’s audio back out to the speakers):
    void CreateSoundStartRecording()
        {
            MODE mode = MODE.OPENUSER | MODE.LOOP_NORMAL;
            RuntimeManager.CoreSystem.createSound("test", mode, ref _soundInfo, out _micSound);
            RuntimeManager.CoreSystem.recordStart(RecordingDeviceIndex, _micSound, true);
            _micSound.getLength(out uint length, TIMEUNIT.PCM);
            _pcmBuffer = new short[length];
            // Adding DSP capture
            // Assign the callback to a member variable to avoid garbage collection
            // Allocate a data buffer large enough for 8 channels
            uint bufferLength;
            int numBuffers;
            FMODUnity.RuntimeManager.CoreSystem.getDSPBufferSize(out bufferLength, out numBuffers);
            mDataBuffer = new float[bufferLength * 8];
            mBufferLength = bufferLength;
        
            // Get a handle to this object to pass into the callback
            mReadCallback = CaptureDSPReadCallback;
            mObjHandle = GCHandle.Alloc(this);
            if (mObjHandle.IsAllocated)
            {
                // Define a basic DSP that receives a callback each mix to capture audio
                FMOD.DSP_DESCRIPTION desc = new FMOD.DSP_DESCRIPTION();
                desc.numinputbuffers = 1;
                desc.numoutputbuffers = 1;
                desc.read = mReadCallback;
                desc.userdata = GCHandle.ToIntPtr(mObjHandle);
                testSet = desc;
                if (FMODUnity.RuntimeManager.CoreSystem.createDSP(ref desc, out mCaptureDSP) == FMOD.RESULT.OK)
                {
                    if (_channelGroup.addDSP(0, mCaptureDSP) != FMOD.RESULT.OK)
                    {
                        Debug.LogWarningFormat("FMOD: Unable to add mCaptureDSP to the microphone channel group");
                    }
                }
                else
                {
                    Debug.LogWarningFormat("FMOD: Unable to create a DSP: mCaptureDSP");
                }
            }
            else
            {
                Debug.LogWarningFormat("FMOD: Unable to create a GCHandle: mObjHandle");
            }
            processInputSequence = StartCoroutine(ProcessInput());
        }
This is wired up to (and calls back into) the following DSP read callback:
		[AOT.MonoPInvokeCallback(typeof(FMOD.DSP_READ_CALLBACK))]
		static FMOD.RESULT CaptureDSPReadCallback(ref FMOD.DSP_STATE dsp_state, IntPtr inbuffer, IntPtr outbuffer, uint length, int inchannels, ref int outchannels)
		{
            FMOD.DSP_STATE_FUNCTIONS functions = (FMOD.DSP_STATE_FUNCTIONS)Marshal.PtrToStructure(dsp_state.functions, typeof(FMOD.DSP_STATE_FUNCTIONS));
			IntPtr userData;
			functions.getuserdata(ref dsp_state, out userData);
			GCHandle objHandle = GCHandle.FromIntPtr(userData);
			RecordMic obj = objHandle.Target as RecordMic;
			// Save the channel count out for the update function
			obj.mChannels = inchannels;
			// Copy the incoming buffer to process later
			int lengthElements = (int)length * inchannels;
			Marshal.Copy(inbuffer, obj.mDataBuffer, 0, lengthElements);
			// Copy the inbuffer to the outbuffer so we can still hear it
			Marshal.Copy(obj.mDataBuffer, 0, outbuffer, lengthElements);
			return FMOD.RESULT.OK;
		}
So I’m not sure. It feels like there’s a callback that, during my shutdown sequence, ends up trying to use something that’s already been torn down, i.e. FMOD has already begun shutting down just a moment before my OnApplicationPause gets hit, so calling recordStop immediately puts you in a bad place (because the recording machinery has already been packed away).
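One mitigation I’ve been considering (I haven’t verified it actually helps, so treat this as a hypothetical guard rather than a fix) is to bail out of the DSP read callback defensively if the managed state has already been torn down, something like:

```csharp
// Hypothetical guard at the top of CaptureDSPReadCallback -- the idea is to
// return early if our managed state has already been torn down mid-shutdown.
FMOD.DSP_STATE_FUNCTIONS functions = (FMOD.DSP_STATE_FUNCTIONS)Marshal.PtrToStructure(
    dsp_state.functions, typeof(FMOD.DSP_STATE_FUNCTIONS));
IntPtr userData;
functions.getuserdata(ref dsp_state, out userData);
if (userData == IntPtr.Zero)
    return FMOD.RESULT.OK; // user data already cleared; nothing safe to do

GCHandle objHandle = GCHandle.FromIntPtr(userData);
RecordMic obj = objHandle.IsAllocated ? objHandle.Target as RecordMic : null;
if (obj == null || obj.mDataBuffer == null)
    return FMOD.RESULT.OK; // managed side already shut down; skip the copy
```

If the crash really is the callback racing my teardown, this kind of guard might at least narrow down where the null dereference is coming from.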
But I could be off in my analysis. If I remove any one of these pieces (the DSP, the playback, or the recording), it gets through the OnApplicationPause step without breaking; it’s the combination that seems to upset things.
Just to lay it all out clearly, this is the whole class. The ProcessDeviceState method has gone through many iterations, so it may look slightly different from the snippets above.
using System;
using System.Runtime.InteropServices;
using FMOD;
using UnityEngine.Serialization;
namespace Audio
{
    using FMODUnity;
    using UnityEngine;
    using System.Collections;
    using FMOD.Studio;
    using Sirenix.OdinInspector;
    public class RecordMic : MonoBehaviour
    {
        //public variables
        [Header("Choose A Microphone")]
        public int RecordingDeviceIndex = 0;
		[TextArea]
        public string RecordingDeviceName = null;
        public float Latency = .05f;
        //FMOD Objects
        private Sound _micSound;
        private CREATESOUNDEXINFO _soundInfo;
        private Channel _channel;
        private ChannelGroup _channelGroup;
        private int numOfDriversConnected = 0;
        private int numofDrivers = 0;
        private Guid MicGUID;
        private int SampleRate = 0;
        private SPEAKERMODE FMODSpeakerMode;
        private int NumOfChannels = 0;
        public const int NumBytesPerSample = 2;
        public const int NumInputChannels = 1;
        private DRIVER_STATE driverState;
        const float WIDTH = 0.01f;
        const float HEIGHT = 10.0f;
        const float YOFFSET = 5.0f;
        public static float micVolume;
        protected short[] _pcmBuffer;
        protected int _bufferPos;
        private int _latencySamples;
        private DSP _reverbDSP;
        [FormerlySerializedAs("_micSfxEvent")]
        [Header("Input Monitoring")]
        [SerializeField]
        [Tooltip("Path to the event for input monitoring")]
        EventReference micSfxEvent;
        Coroutine processInputSequence = null;
        EventInstance micInstance;
        private bool isRecordingActive = false;
        private bool isApplicationPaused;
        private bool isApplicationFocused;
        void Start()
        {
            Initialize();
            CreateSoundStartRecording();
        }
        void Initialize()
        {
            RuntimeManager.CoreSystem.createChannelGroup("Microphone ChannelGroup", out _channelGroup);
            LogMicrophoneAvailability();
        
            ConfigureSoundSettings();
            ConfigureReverb();
    
        }
        void LogMicrophoneAvailability()
        {
            RuntimeManager.CoreSystem.getRecordNumDrivers(out numofDrivers, out numOfDriversConnected);
            Debug.Log(numOfDriversConnected == 0 ? "Plug in a Microphone!!!" : $"You have {numOfDriversConnected} microphones available to record with.");
        }
        void ConfigureSoundSettings()
        {
            RuntimeManager.CoreSystem.getRecordDriverInfo(RecordingDeviceIndex, out RecordingDeviceName, 50, out MicGUID, out SampleRate, out FMODSpeakerMode, out NumOfChannels, out driverState);
            _soundInfo = new CREATESOUNDEXINFO
            {
                cbsize = Marshal.SizeOf(typeof(CREATESOUNDEXINFO)),
                length = (uint)(SampleRate * sizeof(byte) * NumBytesPerSample * NumInputChannels),
                numchannels = NumInputChannels,
                channelorder = CHANNELORDER.ALLMONO,
                defaultfrequency = SampleRate,
                format = SOUND_FORMAT.PCM16,
                dlsname = IntPtr.Zero,
            };
            _latencySamples = (int)(SampleRate * Latency);
        }
        void ConfigureReverb()
        {
            REVERB_PROPERTIES reverbRoom = PRESET.ROOM();
            REVERB_PROPERTIES reverbOff = PRESET.OFF();
            REVERB_PROPERTIES reverbArena = PRESET.ARENA();
            //  RuntimeManager.CoreSystem.setReverbProperties(1, ref reverbArena);
            RuntimeManager.CoreSystem.createDSPByType(DSP_TYPE.SFXREVERB, out _reverbDSP);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.DECAYTIME, 2.0f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.EARLYDELAY, 0.1f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.LATEDELAY, 0.01f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.HFREFERENCE, 5000f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.HFDECAYRATIO, 50f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.DIFFUSION, 70f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.DENSITY, 70f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.LOWSHELFFREQUENCY, 200f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.LOWSHELFGAIN, 0f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.HIGHCUT, 10000f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.EARLYLATEMIX, 50f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.WETLEVEL, -6f);
            _reverbDSP.setParameterFloat((int)FMOD.DSP_SFXREVERB.DRYLEVEL, 0f);
            _channelGroup.addDSP(0, _reverbDSP);
        }
        FMOD.DSP_DESCRIPTION testSet;
        void CreateSoundStartRecording()
        {
            MODE mode = MODE.OPENUSER | MODE.LOOP_NORMAL;
            RuntimeManager.CoreSystem.createSound("test", mode, ref _soundInfo, out _micSound);
            RuntimeManager.CoreSystem.recordStart(RecordingDeviceIndex, _micSound, true);
            _micSound.getLength(out uint length, TIMEUNIT.PCM);
            _pcmBuffer = new short[length];
            // Adding DSP capture
            // Assign the callback to a member variable to avoid garbage collection
            // Allocate a data buffer large enough for 8 channels
            uint bufferLength;
            int numBuffers;
            FMODUnity.RuntimeManager.CoreSystem.getDSPBufferSize(out bufferLength, out numBuffers);
            mDataBuffer = new float[bufferLength * 8];
            mBufferLength = bufferLength;
        
            // Get a handle to this object to pass into the callback
            mReadCallback = CaptureDSPReadCallback;
            mObjHandle = GCHandle.Alloc(this);
            if (mObjHandle.IsAllocated)
            {
                // Define a basic DSP that receives a callback each mix to capture audio
                FMOD.DSP_DESCRIPTION desc = new FMOD.DSP_DESCRIPTION();
                desc.numinputbuffers = 1;
                desc.numoutputbuffers = 1;
                desc.read = mReadCallback;
                desc.userdata = GCHandle.ToIntPtr(mObjHandle);
                testSet = desc;
                if (FMODUnity.RuntimeManager.CoreSystem.createDSP(ref desc, out mCaptureDSP) == FMOD.RESULT.OK)
                {
                    if (_channelGroup.addDSP(0, mCaptureDSP) != FMOD.RESULT.OK)
                    {
                        Debug.LogWarningFormat("FMOD: Unable to add mCaptureDSP to the microphone channel group");
                    }
                }
                else
                {
                    Debug.LogWarningFormat("FMOD: Unable to create a DSP: mCaptureDSP");
                }
            }
            else
            {
                Debug.LogWarningFormat("FMOD: Unable to create a GCHandle: mObjHandle");
            }
            processInputSequence = StartCoroutine(ProcessInput());
        }
        void OnApplicationPause(bool pauseStatus)
        {
           #if !UNITY_EDITOR
            isApplicationPaused = pauseStatus;
            ProcessDeviceState();
           #endif
        }
        void OnApplicationFocus(bool hasFocus)
        {
            #if !UNITY_EDITOR
            isApplicationFocused = hasFocus;
            ProcessDeviceState();
            #endif
        }
   
        void ProcessDeviceState()
        {
           
            if (isRecordingActive && (!isApplicationFocused || isApplicationPaused))
            {
                Debug.Log("Pausing record");
                StopCoroutine(processInputSequence);
                testSet.read = null;
                isRecordingActive = false;
                _channel.stop();
                RuntimeManager.CoreSystem.update();
                RuntimeManager.CoreSystem.mixerSuspend();
                RuntimeManager.StudioSystem.flushCommands();
                RuntimeManager.WaitForAllSampleLoading();
                RuntimeManager.CoreSystem.recordStop(RecordingDeviceIndex);
                Debug.Log("All steps in pause record concluded.");
            }
            else if(!isRecordingActive && isApplicationFocused && !isApplicationPaused)
            {
                isRecordingActive = true;
                Debug.Log("Resuming record");
            }
        }
        void LateUpdate()
        {
            if (isRecordingActive)
            {
                FillPCMBuffer(out int numSamples);
                UpdateDSPViz();
            }
        }
		void UpdateDSPViz()
		{
            float frameVolume = 0f;
			// Do what you want with the captured data
			for (int j = 0; j < mBufferLength; j++)
			{
				for (int i = 0; i < mChannels; i++)
				{
                    frameVolume += Mathf.Abs(mDataBuffer[(j * mChannels) + i]);
					//float x = j * WIDTH;
					//float y = mDataBuffer[(j * mChannels) + i] * HEIGHT;
					// Make sure Gizmos is enabled in the Unity Editor to show debug line draw for the captured channel data
					//Debug.DrawLine(new Vector3(x, (YOFFSET * i) + y, 0), new Vector3(x, (YOFFSET * i) - y, 0), Color.green);
				}
			}
            micVolume = frameVolume / mDataBuffer.Length * HEIGHT;
		}
       
		IEnumerator ProcessInput()
        {
            isRecordingActive = true;
            micInstance = FMODUnity.RuntimeManager.CreateInstance(micSfxEvent);
			var createResult = micInstance.start();
            if (createResult != RESULT.OK) {
                Debug.LogError("[Fmod] Had an issue instantiating the input monitoring event.");
            }
            FMOD.Studio.PLAYBACK_STATE playbackState;
            var attemptCount = 0;
            do {
                attemptCount++;
                if (attemptCount > 50) {
                    Debug.LogError("AUDIO IO MANAGER: reached attempt limit preparing input monitoring event instance");
                    yield break;
                }
                createResult = micInstance.getPlaybackState(out playbackState);
                if (createResult != RESULT.OK) {
                    Debug.LogError("[FMOD] getPlaybackState failed while preparing the input monitoring event.");
                    yield break;
                }
                yield return new WaitForEndOfFrame();
            } while (playbackState != FMOD.Studio.PLAYBACK_STATE.PLAYING);
            var result = micInstance.getPlaybackState(out playbackState);
            
            RuntimeManager.CoreSystem.playSound(_micSound, _channelGroup, false, out _channel);
            while (isRecordingActive)
            {
                createResult = _channel.setPosition((uint)(int)Mathf.Repeat(_bufferPos - _latencySamples, _pcmBuffer.Length), TIMEUNIT.PCM);
                yield break;
            }
        }
        protected bool FillPCMBuffer(out int numSamples)
        {
            numSamples = 0;
            if (!isRecordingActive)
            {
                return false;
            }
            RESULT result = RuntimeManager.CoreSystem.getRecordPosition(RecordingDeviceIndex, out uint recPos);
            if (result != RESULT.OK) return false;
            if (recPos == _bufferPos) return true;
            numSamples = (int)Mathf.Repeat((int)recPos - _bufferPos, _pcmBuffer.Length);
            result = _micSound.@lock(
                (uint)_bufferPos * NumBytesPerSample,
                (uint)numSamples * NumBytesPerSample,
                out IntPtr ptr1, out IntPtr ptr2, out uint len1, out uint len2);
            if (result != RESULT.OK) return false;
            Marshal.Copy(ptr1, _pcmBuffer, _bufferPos, (int)len1 / NumBytesPerSample);
            if (len2 > 0) Marshal.Copy(ptr2, _pcmBuffer, 0, (int)len2 / NumBytesPerSample);
            result = _micSound.unlock(ptr1, ptr2, len1, len2);
            if (result != RESULT.OK) return false;
            _bufferPos = (int)recPos;
            return true;
        }
		private FMOD.DSP_READ_CALLBACK mReadCallback;
		private FMOD.DSP mCaptureDSP;
		private float[] mDataBuffer;
		private GCHandle mObjHandle;
		private uint mBufferLength;
		private int mChannels = 0;
		[AOT.MonoPInvokeCallback(typeof(FMOD.DSP_READ_CALLBACK))]
		static FMOD.RESULT CaptureDSPReadCallback(ref FMOD.DSP_STATE dsp_state, IntPtr inbuffer, IntPtr outbuffer, uint length, int inchannels, ref int outchannels)
		{
            FMOD.DSP_STATE_FUNCTIONS functions = (FMOD.DSP_STATE_FUNCTIONS)Marshal.PtrToStructure(dsp_state.functions, typeof(FMOD.DSP_STATE_FUNCTIONS));
			IntPtr userData;
			functions.getuserdata(ref dsp_state, out userData);
			GCHandle objHandle = GCHandle.FromIntPtr(userData);
			RecordMic obj = objHandle.Target as RecordMic;
			// Save the channel count out for the update function
			obj.mChannels = inchannels;
			// Copy the incoming buffer to process later
			int lengthElements = (int)length * inchannels;
			Marshal.Copy(inbuffer, obj.mDataBuffer, 0, lengthElements);
			// Copy the inbuffer to the outbuffer so we can still hear it
			Marshal.Copy(obj.mDataBuffer, 0, outbuffer, lengthElements);
			return FMOD.RESULT.OK;
		}
	}
}
Appreciate your time in looking at this. If there’s anything else specific I can provide, please let me know.