Your only other option is to directly interact with the Core API, which means you lose out on all Studio functionality.
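For reference, working with the Core API directly looks roughly like the sketch below - you create and update the Core system yourself and play sounds without events, banks, or the Studio mixer. The file path is just a placeholder:

```cpp
#include <fmod.hpp>

int main()
{
    // Create and initialise the Core system directly (no Studio layer).
    FMOD::System* system = nullptr;
    FMOD::System_Create(&system);
    system->init(512, FMOD_INIT_NORMAL, nullptr);

    // Load and play a sound without any Studio events, banks, or mixer routing.
    // "my_sound.wav" is a placeholder path.
    FMOD::Sound* sound = nullptr;
    system->createSound("my_sound.wav", FMOD_DEFAULT, nullptr, &sound);

    FMOD::Channel* channel = nullptr;
    system->playSound(sound, nullptr, false, &channel);

    // ... call system->update() each frame, then release on shutdown.
    sound->release();
    system->release();
    return 0;
}
```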
There isn’t a way to “fix” the sound to a programmer instrument - essentially, once the programmer instrument untriggers, it loses its association with the sound. However, you can create the sound a single time in your own code and then pass it into the event callback via user data, so that the programmer instrument plays the same sound each time.
Take a look at this reply for an example of how you might do so: Adding effects to real-time recorded audio - #2 by Connor_FMOD
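To illustrate the idea, here is a minimal C++ sketch of that approach - create the FMOD::Sound once, attach it to the event instance as user data, and hand it to the programmer instrument from the CREATE_PROGRAMMER_SOUND callback. The file path and function names are placeholders, so treat this as a starting point rather than a drop-in implementation:

```cpp
#include <fmod.hpp>
#include <fmod_studio.hpp>

// When the programmer instrument is created, hand it the FMOD::Sound we
// previously stored on the event instance's user data.
FMOD_RESULT F_CALLBACK programmerSoundCallback(FMOD_STUDIO_EVENT_CALLBACK_TYPE type,
                                               FMOD_STUDIO_EVENTINSTANCE* event,
                                               void* parameters)
{
    if (type == FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND)
    {
        auto* instance = reinterpret_cast<FMOD::Studio::EventInstance*>(event);

        // Retrieve the sound we created once up front and attached as user data.
        void* userData = nullptr;
        instance->getUserData(&userData);

        auto* props = static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);
        props->sound = static_cast<FMOD_SOUND*>(userData);
        props->subsoundIndex = -1;
    }
    // Deliberately do NOT release the sound on DESTROY_PROGRAMMER_SOUND,
    // since we want to reuse the same FMOD::Sound across triggers.
    return FMOD_OK;
}

// Hypothetical setup function: create the sound a single time and register the callback.
void setupProgrammerSound(FMOD::Studio::System* studioSystem,
                          FMOD::Studio::EventInstance* instance)
{
    FMOD::System* coreSystem = nullptr;
    studioSystem->getCoreSystem(&coreSystem);

    // Create the sound once; "my_sound.wav" is a placeholder path.
    FMOD::Sound* sound = nullptr;
    coreSystem->createSound("my_sound.wav", FMOD_CREATESAMPLE, nullptr, &sound);

    // Pass the sound into the callback via user data, then register the callback.
    instance->setUserData(sound);
    instance->setCallback(programmerSoundCallback,
                          FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND);
}
```

Since the sound isn't released in the callback, remember to release it yourself once you no longer need the event.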
No, this isn’t an option.
A few options come to mind, such as ffmpeg, Audacity, and fre:ac, though there may be others that suit your needs better.