I ran into a problem. In our game we use FMOD to play sound, and now we need to record a player’s battle and save it as a video. The plugins I tried were ffmpeg and NatCorder, but both of them had problems dealing with FMOD’s audio, and the sound in the recorded video was broken. All the data they need is a binary array of linear PCM; how can they get this data?
Hi,
There are a couple of ways of doing this depending on what exactly you need:
- FMOD_OUTPUTTYPE_WAVWRITER allows the FMOD System to write its mixer output to a wav file. Note that using this means the system will not output to your regular audio output device, which may make it unsuitable for your purposes.
- We have a Unity DSP Capture scripting example that shows how to extract the raw PCM data from a given DSP in FMOD’s signal chain, in this case the Master Channel Group. Unlike WAVWRITER, this can be used even while outputting to audio devices.
I would recommend giving my reply in the following thread a read, as it broadly details how to do both things, including how to write the output of the DSP capture to wav if needed: Capturing Unity game’s FMOD output (5.1 / Busses) - #2 by Louis_FMOD
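For reference, here is a minimal Core API sketch of the WAVWRITER idea. This is raw Core API rather than the Unity integration (with FMOD for Unity the output type has to be set before the RuntimeManager initializes the system), and the file name "capture.wav" and channel count are placeholder values:

using System.Runtime.InteropServices;

public static class WavWriterSketch
{
    public static void RunSketch()
    {
        // Create a Core System and select the WAVWRITER output before init.
        FMOD.Factory.System_Create(out FMOD.System system);
        system.setOutput(FMOD.OUTPUTTYPE.WAVWRITER);

        // For WAVWRITER, the target wav path is passed to init via extradriverdata.
        System.IntPtr wavPath = Marshal.StringToHGlobalAnsi("capture.wav");
        system.init(512, FMOD.INITFLAGS.NORMAL, wavPath);

        // ... play sounds as usual; the mix is written to capture.wav instead of the audio device ...

        // The wav file is finalised when the system is closed and released.
        system.close();
        system.release();
        Marshal.FreeHGlobal(wavPath);
    }
}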
Thanks for your reply. I tried the first method, saving it as a wav file, and it succeeded, but FMOD only writes the wav file when the RuntimeManager is destroyed, so I wrote a script to destroy the RuntimeManager when the screen recording stops.
using UnityEngine;
using FMODUnity;
public class RecFmodAudio : MonoBehaviour, IRecAudio
{
private float startTime;
void Awake()
{
enabled = false;
}
public void StartRecording()
{
enabled = true;
startTime = Time.time;
Camera.main.gameObject.AddComponent<StudioListener>();
}
public void StopRecording(string savePath)
{
enabled = false;
var listener = Camera.main.gameObject.GetComponent<StudioListener>();
Destroy(listener);
var go = GameObject.Find("FMOD.UnityIntegration.RuntimeManager");
DestroyImmediate(go);
}
}
But I don’t want to destroy it; I want to get FMOD’s output in real time. Is there any way? Looking at the other example you mentioned, ScriptUsageDspCapture, how does CaptureDSPReadCallback get all of FMOD’s output? Can you show the code? Thanks a million!
ScriptUsageDspCapture creates a custom DSP which is placed on the FMOD System’s Master Channel Group, with a custom static read callback CaptureDSPReadCallback(). The read callback is called by the FMOD System with every mixer update and receives the audio signal that flows into the DSP, allowing you to directly access and/or modify the signal.
Additionally, you also pass the entire instance of the class ScriptUsageDspCapture to the DSP as user data. The class instance is then accessed from the user data within the callback, and the incoming audio signal is copied to the class member mDataBuffer. The data copied to mDataBuffer within the callback is then processed in ScriptUsageDspCapture.Update().
In the case of the scripting example, the data is used in Update() to draw the signal’s waveform, but for your purposes you’d instead want to accumulate all of the audio data and then presumably save it to file - that’s where the response linked in the other thread regarding writing to .wav comes in: record and output to .wav file - #2 by mathew
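To sketch the accumulation side, a small helper like the following could collect every buffer the callback receives (a minimal sketch - PcmAccumulator and its members are hypothetical names, not part of the example; the lock matters because the read callback runs on FMOD’s mixer thread rather than the main thread):

using System.Collections.Generic;

// Collects interleaved float PCM handed over by a DSP read callback.
public class PcmAccumulator
{
    private readonly List<float> samples = new List<float>();
    private readonly object gate = new object();

    // Call from the DSP read callback with the data copied out of 'inbuffer'.
    public void Append(float[] block, int validLength)
    {
        lock (gate)
        {
            for (int i = 0; i < validLength; i++)
            {
                samples.Add(block[i]);
            }
        }
    }

    // Call on the main thread once recording has stopped; returns the whole stream.
    public float[] Drain()
    {
        lock (gate)
        {
            float[] all = samples.ToArray();
            samples.Clear();
            return all;
        }
    }
}

Inside CaptureDSPReadCallback you would copy inbuffer into a temporary float[] (as the example already does with mDataBuffer), call Append on it, and still copy the data on to outbuffer so playback remains audible; Drain() then gives you the full interleaved stream to write to file.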
Yes, I have seen the C++ example, but what I didn’t understand is that in that example a sound is created to record the microphone input. What I need is to record the sound that has already been authored by the sound designers and is played back by FMOD. In that case, I don’t need to create a sound myself, right? So I still do not understand how to record FMOD’s output. I changed the sample code as shown below, which also doesn’t work. Can you give me a code example?
//--------------------------------------------------------------------
//
// This is a Unity behaviour script that demonstrates how to capture
// the DSP with FMOD.DSP_READ_CALLBACK and how to create and add a DSP
// to master channel group ready for capturing.
//
// For the description of the channel counts. See
// https://fmod.com/docs/2.02/api/core-api-common.html#fmod_speakermode
//
// This document assumes familiarity with Unity scripting. See
// https://unity3d.com/learn/tutorials/topics/scripting for resources
// on learning Unity scripting.
//
//--------------------------------------------------------------------
using System;
using UnityEngine;
using System.Runtime.InteropServices;
using NatSuite;
using NatSuite.Recorders;
using NatSuite.Recorders.Clocks;
using FMODUnity;
using System.Collections.Generic;
using FMOD.Studio;
using System.Security.Cryptography;
public class ScriptUsageDspCapture : MonoBehaviour
{
private FMOD.DSP_READCALLBACK mReadCallback;
private FMOD.DSP mCaptureDSP;
private float[] mDataBuffer;
private GCHandle mObjHandle;
private uint mBufferLength;
private int mChannels = 0;
public MP4Recorder MP4Recorder;
public RealtimeClock RealtimeClock;
public bool StopRecord;
Dictionary<FMOD.ChannelGroup,FMOD.DSP> mDSPs;
[AOT.MonoPInvokeCallback(typeof(FMOD.DSP_READCALLBACK))]
static FMOD.RESULT CaptureDSPReadCallback(ref FMOD.DSP_STATE dsp_state, IntPtr inbuffer, IntPtr outbuffer, uint length, int inchannels, ref int outchannels)
{
FMOD.DSP_STATE_FUNCTIONS functions = (FMOD.DSP_STATE_FUNCTIONS)Marshal.PtrToStructure(dsp_state.functions, typeof(FMOD.DSP_STATE_FUNCTIONS));
IntPtr userData;
functions.getuserdata(ref dsp_state, out userData);
GCHandle objHandle = GCHandle.FromIntPtr(userData);
ScriptUsageDspCapture obj = objHandle.Target as ScriptUsageDspCapture;
// Save the channel count out for the update function
obj.mChannels = inchannels;
Debug.LogError($"channel {inchannels}");
// Copy the incoming buffer to process later
int lengthElements = (int)length * inchannels;
Marshal.Copy(inbuffer, obj.mDataBuffer, 0, lengthElements);
// Copy the inbuffer to the outbuffer so we can still hear it
Marshal.Copy(obj.mDataBuffer, 0, outbuffer, lengthElements);
return FMOD.RESULT.OK;
}
void Start()
{
mDSPs = new Dictionary<FMOD.ChannelGroup, FMOD.DSP>();
var res = RuntimeManager.StudioSystem.update();
var result = RuntimeManager.StudioSystem.flushCommands();
// Assign the callback to a member variable to avoid garbage collection
mReadCallback = CaptureDSPReadCallback;
// Allocate a data buffer large enough for 8 channels, pin the memory to avoid garbage collection
uint bufferLength;
int numBuffers;
FMODUnity.RuntimeManager.CoreSystem.getDSPBufferSize(out bufferLength, out numBuffers);
mDataBuffer = new float[bufferLength * 8];
mBufferLength = bufferLength;
// Get a handle to this object to pass into the callback
mObjHandle = GCHandle.Alloc(this);
if (mObjHandle != null)
{
// Define a basic DSP that receives a callback each mix to capture audio
FMOD.DSP_DESCRIPTION desc = new FMOD.DSP_DESCRIPTION();
desc.numinputbuffers = 1;
desc.numoutputbuffers = 1;
desc.read = mReadCallback;
desc.userdata = GCHandle.ToIntPtr(mObjHandle);
RuntimeManager.StudioSystem.getBankCount(out var numBanks);
RuntimeManager.StudioSystem.getBankList(out var banks);
for (int currentBank = 0; currentBank < numBanks; ++currentBank)
{
int numBusses = 0;
FMOD.Studio.Bus[] busses = null;
banks[currentBank].getBusCount(out numBusses);
banks[currentBank].getBusList(out busses);
for (int currentBus = 0; currentBus < numBusses; ++currentBus)
{
// Make sure the channel group of the current bus is assigned properly.
string busPath = null;
busses[currentBus].getPath(out busPath);
RuntimeManager.StudioSystem.getBus(busPath, out busses[currentBus]);
busses[currentBus].lockChannelGroup();
RuntimeManager.StudioSystem.flushCommands();
FMOD.ChannelGroup channelGroup;
busses[currentBus].getChannelGroup(out channelGroup);
RuntimeManager.CoreSystem.createDSP(ref desc, out var dsp);
mDSPs.Add(channelGroup, dsp);
channelGroup.addDSP(0, dsp);
busses[currentBus].unlockChannelGroup();
}
}
//// Create an instance of the capture DSP and attach it to the master channel group to capture all audio
//FMOD.ChannelGroup masterCG;
//if (FMODUnity.RuntimeManager.CoreSystem.getMasterChannelGroup(out masterCG) == FMOD.RESULT.OK)
//{
// if (FMODUnity.RuntimeManager.CoreSystem.createDSP(ref desc, out mCaptureDSP) == FMOD.RESULT.OK)
// {
// if (masterCG.addDSP(0, mCaptureDSP) != FMOD.RESULT.OK)
// {
// Debug.LogWarningFormat("FMOD: Unable to add mCaptureDSP to the master channel group");
// }
// }
// else
// {
// Debug.LogWarningFormat("FMOD: Unable to create a DSP: mCaptureDSP");
// }
//}
//else
//{
// Debug.LogWarningFormat("FMOD: Unable to create a master channel group: masterCG");
//}
}
else
{
Debug.LogWarningFormat("FMOD: Unable to create a GCHandle: mObjHandle");
}
}
void OnDestroy()
{
if (mObjHandle != null)
{
RuntimeManager.StudioSystem.getBankCount(out var numBanks);
RuntimeManager.StudioSystem.getBankList(out var banks);
for (int currentBank = 0; currentBank < numBanks; ++currentBank)
{
int numBusses = 0;
FMOD.Studio.Bus[] busses = null;
banks[currentBank].getBusCount(out numBusses);
banks[currentBank].getBusList(out busses);
for (int currentBus = 0; currentBus < numBusses; ++currentBus)
{
// Make sure the channel group of the current bus is assigned properly.
string busPath = null;
busses[currentBus].getPath(out busPath);
RuntimeManager.StudioSystem.getBus(busPath, out busses[currentBus]);
busses[currentBus].lockChannelGroup();
RuntimeManager.StudioSystem.flushCommands();
FMOD.ChannelGroup channelGroup;
busses[currentBus].getChannelGroup(out channelGroup);
if(mDSPs.ContainsKey(channelGroup))
channelGroup.removeDSP(mDSPs[channelGroup]);
busses[currentBus].unlockChannelGroup();
}
}
mObjHandle.Free();
}
}
const float WIDTH = 10.0f;
const float HEIGHT = 1.0f;
void Update()
{
// Do what you want with the captured data
if (mChannels != 0)
{
if (StopRecord == false)
MP4Recorder.CommitSamples(mDataBuffer, RealtimeClock.timestamp);
float yOffset = 5.7f;
for (int j = 0; j < mChannels; j++)
{
var pos = Vector3.zero;
pos.x = WIDTH * -0.5f;
for (int i = 0; i < mBufferLength; ++i)
{
pos.x += (WIDTH / mBufferLength);
pos.y = mDataBuffer[i + j * mBufferLength] * HEIGHT;
// Make sure Gizmos is enabled in the Unity Editor to show debug line draw for the captured channel data
Debug.DrawLine(new Vector3(pos.x, yOffset + pos.y, 0), new Vector3(pos.x, yOffset - pos.y, 0), Color.green);
}
yOffset -= 1.9f;
}
}
}
}
Unlike the example code, here I get all the banks, then get the buses inside each bank, take the channel group of each bus, and add the DSP callback to it.
You don’t need to create the sound yourself in this case. The DSP Capture example gives you access to the audio data FMOD is currently playing. Instead of only using the current audio buffer, which is what the DSP Capture example does, you need to store all of the audio data and then write it to file when playback has finished.
The example shows how to record from a microphone and write the microphone audio to a file. I linked it specifically because it demonstrates how to write audio sample data to a WAV file - if you already have a way of writing your audio data to file, you can use that instead.
Thanks. I set the output type to WAVWRITER, then used ffmpeg to combine the pictures and the audio into a video.
Here I have a problem: the video I recorded is only 1 minute 12 seconds long, but the output audio file is 2 minutes 0 seconds long. Why are these two lengths not the same? My approach is to set the output mode to WAVWRITER, and then destroy the RuntimeManager when the screen recording stops.
So I want to know what factors affect the length of the output audio file, and how to make the length of the audio file and the length of the video file consistent, so that the sound and the picture are synchronized.
The only factor that should affect the length of the output audio file is how long the FMOD System exists after initialization. The most likely thing that would cause a 48 second gap in length between the recorded audio and video is that the Unity game, and by extension the FMOD System, was playing for 48 seconds before you started video recording. You can confirm this by attempting to match your video to the relevant portions of the audio file - if they line up but there’s an excess of audio at the start of the file, then this is what has happened.
If this isn’t the case, and the audio file appears to be stretched in some way, can I get you to confirm your FMOD for Unity version number so I can try to reproduce the issue? Your previous posts indicate that you’re using 2.02.07, but just to be sure.
Yes, the FMOD version is 2.02.07 and the Unity version is 2021.3.19.
I recorded another video. At the beginning the audio corresponded with the picture, and then it drifted worse and worse. I logged the time with Time.realtimeSinceStartup: the initial value is 86 and the end value is 173, which should be 83 seconds, but the final output audio is 2 minutes and 31 seconds long.
I can think of one possibility: is FMOD’s time system consistent with Unity’s time system? Could the mismatch be due to this inconsistency? The way I make videos is to take screenshots at regular intervals, save them as pictures, and then use ffmpeg to assemble the video. The duration of the video is obtained by recording a start time with Time.time and subtracting it from Time.time when the recording stops.
Thanks for pointing out the issue - I’ve been able to reproduce it on my end and I’ve passed it along to the development team for further investigation. As a workaround, I would recommend either manually trimming the written wav file yourself, or using the DSP capture example I mentioned at the top of the thread (though you will need to handle writing to file yourself in that case).
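If you go with trimming, something along these lines could work. This is a rough sketch that assumes a canonical 44-byte WAV header (byte rate at offset 28, block align at 32, audio data starting at offset 44) - I haven’t verified that FMOD’s wav writer output matches this layout exactly, so treat the offsets as assumptions:

using System;
using System.IO;

// Rough sketch: drop the first 'secondsToTrim' seconds from a wav file with a canonical 44-byte header.
public static class WavTrimmer
{
    public static void TrimStart(string inputPath, string outputPath, double secondsToTrim)
    {
        byte[] file = File.ReadAllBytes(inputPath);

        int byteRate = BitConverter.ToInt32(file, 28);      // average bytes of audio per second
        short blockAlign = BitConverter.ToInt16(file, 32);  // bytes per sample frame

        const int headerSize = 44;
        int oldDataSize = file.Length - headerSize;

        int bytesToSkip = (int)(secondsToTrim * byteRate);
        if (blockAlign > 0)
            bytesToSkip -= bytesToSkip % blockAlign;         // keep the cut aligned to whole frames
        bytesToSkip = Math.Min(Math.Max(bytesToSkip, 0), oldDataSize);

        int newDataSize = oldDataSize - bytesToSkip;
        byte[] output = new byte[headerSize + newDataSize];
        Array.Copy(file, 0, output, 0, headerSize);
        Array.Copy(file, headerSize + bytesToSkip, output, headerSize, newDataSize);

        // Patch the RIFF chunk size (offset 4) and the data chunk size (offset 40).
        BitConverter.GetBytes(output.Length - 8).CopyTo(output, 4);
        BitConverter.GetBytes(newDataSize).CopyTo(output, 40);

        File.WriteAllBytes(outputPath, output);
    }
}

With the 48-second offset mentioned above, that would be WavTrimmer.TrimStart(source, destination, 48.0).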
Thank you for your reply, but I don’t know how to capture the sound of all channels with the DSP capture example, so I decided to leave the sound out for now.
If you mean FMOD Channel objects, then you just need to add the custom DSP on whatever Channels/ChannelGroups you want the audio from. An easy way to handle this is by using Buses, since you can retrieve their underlying ChannelGroups to place the DSP on with Studio::Bus::getChannelGroup.
If you mean the actual audio channels in the output signal, the audio signal is interleaved, meaning each “sample” contains one value for each channel in the audio signal. For more info, you can read over the Sample Data section of the FMOD Engine Glossary.
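As a small illustration, splitting an interleaved buffer back into per-channel arrays is just a stride/offset loop (a minimal sketch; InterleaveUtil is a hypothetical name, and the buffer and channel count would come from the DSP read callback):

// Splits an interleaved buffer (frames of [ch0, ch1, ..., chN-1], e.g. [L0, R0, L1, R1, ...]
// for stereo) into one array per channel.
public static class InterleaveUtil
{
    public static float[][] Deinterleave(float[] interleaved, int channels)
    {
        int frames = interleaved.Length / channels;
        var perChannel = new float[channels][];
        for (int c = 0; c < channels; c++)
        {
            perChannel[c] = new float[frames];
            for (int i = 0; i < frames; i++)
            {
                // Sample i of channel c sits at frame offset i * channels, plus channel offset c.
                perChannel[c][i] = interleaved[i * channels + c];
            }
        }
        return perChannel;
    }
}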
That said, if trimming the WAV yourself is adequate for your use-case, there’s no reason not to do that instead.
Here is a script that I use for my project. I was stumbling into the same issue and had to develop a custom solution to be able to use the Unity Recorder to record video together with FMOD audio.
Attach CameraRecorder to the camera you want to record from. Note that it must NOT be your gameplay camera; create a new one and place it where you want.
The system requires the Unity Recorder package to be installed. It also requires FFmpeg to be installed, and you will need to set the path to it in the inspector. FFmpeg is used to combine the recorded results.
The component records video and FMOD audio separately, and then combines them into one file. You can choose whether to delete or keep the original files in case you want to compose them manually.
Files are stored in <Project>/Recordings.
Note: Since the Unity Recorder is an Editor-only tool, this approach won’t work in a build. You will also need to enable Unity Audio for proper use of the Unity Recorder (it is typically disabled when using FMOD).
CameraRecorder class
using System;
using UnityEngine;
using UnityEngine.InputSystem;
using System.Collections;
using System.Diagnostics;
using System.IO;
using UnityEditor.Recorder;
using UnityEditor.Recorder.Encoder;
using UnityEditor.Recorder.Input;
using Debug = UnityEngine.Debug;
namespace Recorder
{
[RequireComponent(typeof(Camera))] // Ensure a Camera component is attached
public class CameraRecorder : MonoBehaviour
{
[SerializeField] private int width = 1920;
[SerializeField] private int height = 1080;
[SerializeField] private float frameRate = 60; // Set target FPS to 60
[SerializeField] private string outputFileName = "RecordedVideo";
[SerializeField] private InputActionReference recordingAction;
[SerializeField] private bool recordAudio = true;
[SerializeField] private bool deleteSourceFiles = false;
[SerializeField] private float waitBeforeTryToMerge = 2.0f;
[SerializeField] private string pathToFFMPEG = "/opt/homebrew/bin/ffmpeg";
private Camera cameraComponent;
private RenderTexture temporaryRenderTexture;
private RecorderController recorderController;
private AudioRecorder audioRecorder;
private bool isRecording = false;
private string outputDirectory;
private string videoFilePath;
private string audioFilePath;
private void Start()
{
cameraComponent = GetComponent<Camera>();
if (recordAudio)
{
audioRecorder = new AudioRecorder();
}
recordingAction.action.Enable();
recordingAction.action.performed += context =>
{
if (isRecording)
{
StopRecording();
}
else
{
StartRecording();
}
};
}
private void SetupRecorder()
{
var fullName = Directory.GetParent(Application.dataPath)?.FullName;
if (fullName != null)
outputDirectory = Path.Combine(fullName, "Recordings");
Directory.CreateDirectory(outputDirectory);
var dateTimeSuffix = DateTime.Now.ToString("MM.dd.HH.mm.ss");
var outputFileCombined = $"{outputFileName}_{dateTimeSuffix}";
videoFilePath = Path.Combine(outputDirectory, outputFileCombined);
audioFilePath = Path.ChangeExtension(videoFilePath, ".wav");
// Initialize the temporary RenderTexture
temporaryRenderTexture = new RenderTexture(width, height, 24);
temporaryRenderTexture.format = RenderTextureFormat.ARGB32;
cameraComponent.targetTexture = temporaryRenderTexture;
var renderTextureInput = new RenderTextureInputSettings
{
RenderTexture = temporaryRenderTexture
};
var videoRecorderSettings = ScriptableObject.CreateInstance<MovieRecorderSettings>();
videoRecorderSettings.name = "CustomVideoRecorder";
videoRecorderSettings.Enabled = true;
videoRecorderSettings.EncoderSettings = new CoreEncoderSettings();
videoRecorderSettings.ImageInputSettings = renderTextureInput;
videoRecorderSettings.OutputFile = videoFilePath;
videoRecorderSettings.FrameRatePlayback = FrameRatePlayback.Constant;
videoRecorderSettings.FrameRate = frameRate;
videoRecorderSettings.CapFrameRate = true;
var recorderControllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
recorderControllerSettings.AddRecorderSettings(videoRecorderSettings);
recorderControllerSettings.SetRecordModeToManual();
recorderControllerSettings.FrameRate = frameRate;
recorderControllerSettings.CapFrameRate = true;
recorderController = new RecorderController(recorderControllerSettings);
}
private void StartRecording()
{
SetupRecorder();
isRecording = true;
// Ensure the temporary RenderTexture is available
if (temporaryRenderTexture == null)
{
temporaryRenderTexture = new RenderTexture(width, height, 24);
cameraComponent.targetTexture = temporaryRenderTexture;
}
recorderController.PrepareRecording();
recorderController.StartRecording();
if (recordAudio)
{
audioRecorder.StartRecording();
}
}
private void StopRecording()
{
isRecording = false;
if (recordAudio)
{
audioRecorder.StopRecording();
}
recorderController.StopRecording();
StartCoroutine(SaveAudio());
// Clean up the temporary RenderTexture
if (temporaryRenderTexture != null)
{
cameraComponent.targetTexture = null;
Destroy(temporaryRenderTexture);
temporaryRenderTexture = null;
}
if (!recordAudio) return;
StartCoroutine(WaitForRecordingToFinish());
}
private IEnumerator WaitForRecordingToFinish()
{
yield return new WaitForSeconds(waitBeforeTryToMerge);
//We do not add the extension to the path initially due to the Recorder's structure.
//The Recorder appends it internally, but we need it here after recording is done.
videoFilePath += ".mp4";
if (File.Exists(videoFilePath))
{
Debug.Log("Recording finished. Starting to combine video and audio.");
CombineVideoAndAudio();
}
else
{
Debug.LogError("Recorded video file was not found after it was recorded.");
}
}
private IEnumerator SaveAudio()
{
if (recordAudio)
{
audioRecorder.SaveAudioToWav(audioFilePath);
yield return null;
Debug.Log("Audio saved to " + audioFilePath);
}
Debug.Log("Recording completed.");
}
private void CombineVideoAndAudio()
{
var combinedFilePath = Path.Combine(Path.GetDirectoryName(videoFilePath),
Path.GetFileNameWithoutExtension(videoFilePath) + "_Combined" + Path.GetExtension(videoFilePath)
);
// Build the FFmpeg command
var ffmpegCommand = $"-i \"{videoFilePath}\" -i \"{audioFilePath}\" -c:v copy -c:a aac -strict experimental \"{combinedFilePath}\"";
var process = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = pathToFFMPEG, // Full path to the ffmpeg executable (set in the inspector)
Arguments = ffmpegCommand,
RedirectStandardOutput = true,
RedirectStandardError = true,
UseShellExecute = false,
CreateNoWindow = true
}
};
try
{
process.Start();
string output = process.StandardOutput.ReadToEnd();
string error = process.StandardError.ReadToEnd();
process.WaitForExit();
if (process.ExitCode == 0)
{
Debug.Log("Combined video and audio saved to " + combinedFilePath);
if (deleteSourceFiles)
{
DeleteSourceFiles();
}
}
else
{
Debug.LogError("FFmpeg failed with exit code " + process.ExitCode);
Debug.LogError("FFmpeg Output: " + output);
Debug.LogError("FFmpeg Errors: " + error);
}
}
catch (Exception ex)
{
Debug.LogError("An error occurred while combining video and audio: " + ex.Message);
}
}
private void DeleteSourceFiles()
{
try
{
if (File.Exists(videoFilePath))
{
File.Delete(videoFilePath);
Debug.Log("Deleted video file: " + videoFilePath);
}
if (File.Exists(audioFilePath))
{
File.Delete(audioFilePath);
Debug.Log("Deleted audio file: " + audioFilePath);
}
}
catch (Exception ex)
{
Debug.LogError("An error occurred while deleting source files: " + ex.Message);
}
}
}
}
AudioRecorder class
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;
using FMODUnity;
namespace Recorder
{
public class AudioRecorder
{
private FMOD.DSP mDSP;
private FMOD.ChannelGroup mCg;
private readonly List<float> mAudioData;
private readonly int mSampleRate;
private int mNumChannels;
private FMOD.DSP_DESCRIPTION mDSPDescription;
public AudioRecorder()
{
mAudioData = new List<float>();
RuntimeManager.CoreSystem.getSoftwareFormat(out mSampleRate, out _, out _);
// A normal handle is enough to pass this instance through user data; pinning a managed
// object with reference-type fields can throw on some runtimes.
var mObjHandle = GCHandle.Alloc(this);
mDSPDescription = new FMOD.DSP_DESCRIPTION
{
numinputbuffers = 1,
numoutputbuffers = 1,
read = CaptureDSPReadCallback,
userdata = GCHandle.ToIntPtr(mObjHandle)
};
}
public void StartRecording()
{
mAudioData.Clear();
var bus = RuntimeManager.GetBus("bus:/");
if (bus.getChannelGroup(out mCg) != FMOD.RESULT.OK) return;
RuntimeManager.CoreSystem.createDSP(ref mDSPDescription, out mDSP);
mCg.addDSP(0, mDSP);
}
public void StopRecording()
{
if (!mDSP.hasHandle()) return;
mCg.removeDSP(mDSP);
mDSP.release();
}
public void SaveAudioToWav(string filePath)
{
using var fs = File.Create(filePath);
using var bw = new BinaryWriter(fs);
WriteWavHeader(bw, mAudioData.Count);
var bytes = new byte[mAudioData.Count * 4];
Buffer.BlockCopy(mAudioData.ToArray(), 0, bytes, 0, bytes.Length);
fs.Write(bytes, 0, bytes.Length);
}
[AOT.MonoPInvokeCallback(typeof(FMOD.DSP_READ_CALLBACK))]
private static FMOD.RESULT CaptureDSPReadCallback(ref FMOD.DSP_STATE dspState, IntPtr inBuffer, IntPtr outBuffer, uint length, int inChannels, ref int outChannels)
{
var lengthElements = (int)length * inChannels;
var data = new float[lengthElements];
Marshal.Copy(inBuffer, data, 0, lengthElements);
var functions = (FMOD.DSP_STATE_FUNCTIONS)Marshal.PtrToStructure(dspState.functions, typeof(FMOD.DSP_STATE_FUNCTIONS));
functions.getuserdata(ref dspState, out var userData);
if (userData != IntPtr.Zero)
{
var objHandle = GCHandle.FromIntPtr(userData);
if (objHandle.Target is AudioRecorder { mAudioData: { } } obj)
{
obj.mNumChannels = inChannels;
obj.mAudioData.AddRange(data);
}
}
Marshal.Copy(data, 0, outBuffer, lengthElements);
return FMOD.RESULT.OK;
}
private void WriteWavHeader(BinaryWriter bw, int length)
{
// 'length' is the number of float samples; each sample is 4 bytes (32-bit IEEE float).
int dataSize = length * 4;
bw.Seek(0, SeekOrigin.Begin);
bw.Write(System.Text.Encoding.ASCII.GetBytes("RIFF"));
bw.Write(36 + dataSize); // RIFF chunk size = total file size - 8
bw.Write(System.Text.Encoding.ASCII.GetBytes("WAVEfmt "));
bw.Write(16); // fmt chunk size
bw.Write((short)3); // format tag 3 = IEEE float
bw.Write((short)mNumChannels);
bw.Write(mSampleRate);
bw.Write(mSampleRate * 32 / 8 * mNumChannels); // byte rate
bw.Write((short)(32 / 8 * mNumChannels)); // block align
bw.Write((short)32); // bits per sample
bw.Write(System.Text.Encoding.ASCII.GetBytes("data"));
bw.Write(dataSize); // data chunk size
}
}
}