I found the solution myself, but I think it is worth showing what a clear answer should look like, instead of just telling someone to change the AVAudioSession category without explaining how to do it.
So, in order to record on iOS devices: Unity exports iOS builds as an Xcode project rather than an .ipa file, so after you create a build you copy the project to a Mac and open it in Xcode. Before you open it, go to the Classes folder inside the project; there you will find a file named: UnityAppController.mm
Open it and you will see a function startUnity. Shortly after this line of code: UnitySetPlayerFocus(1);
the function configures the AVAudioSession. There, you add one line that sets the session category to AVAudioSessionCategoryPlayAndRecord, so the microphone becomes usable. The relevant part of the function then looks like this (the added line is marked):

NSAssert(_unityAppReady == NO, @"[UnityAppController startUnity:] called after Unity has been initialized");
// we make sure that first level gets correct display list and orientation
[[DisplayManager Instance] updateDisplayListCacheInUnity];
AVAudioSession* audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory: AVAudioSessionCategoryPlayAndRecord error: nil]; // <-- the added line: allows recording as well as playback
[audioSession setActive: (UnityShouldActivateAVAudioSession() == 1) error: nil];
[audioSession addObserver: self forKeyPath: @"outputVolume" options: 0 context: nil];
UnityUpdateMuteState([audioSession outputVolume] < 0.01f ? 1 : 0);
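For reference, the same audio-session change can be expressed in Swift. This is just a sketch for anyone who prefers to add their own Swift file to the exported Xcode project instead of editing UnityAppController.mm directly; the function name enableRecording is my own illustration, not something Unity provides:

```swift
import AVFoundation

// Sketch: configure the shared AVAudioSession so recording works.
// enableRecording is an illustrative name, not part of Unity's exported project.
func enableRecording() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .playAndRecord permits microphone input while keeping audio output,
        // matching AVAudioSessionCategoryPlayAndRecord in the Objective-C code above.
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setActive(true)
    } catch {
        print("Failed to configure audio session: \(error)")
    }
}
```

Note that any Swift file you add will be overwritten the next time Unity regenerates the Xcode project unless you use "append" builds or automate the patch.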
See, that is an answer worthy of its name; it wasn't too hard, was it?