We’re setting up some dialogue conversations using the Dialogue Wave and Dialogue Voice components in UE4, but unfortunately they will only accept UE4 Sound Waves, not FMOD Events. Obviously we would like to keep all of our dialogue audio samples in FMOD for mix purposes, but we also want to make use of some of the new dialogue and localization features coming soon in Unreal. Is this functionality that will be addressed in a future release of FMOD, or is this something that would need to be provided by Epic?
As you have noticed, the inbuilt UE4 dialogue/voice components work separately from the FMOD audio components.
You can either run the UE4 audio side-by-side with FMOD, or else use FMOD events for dialogue instead.
If you have lots of dialogue, then creating an FMOD event for each line may start getting unwieldy. There are solutions to that with programmer sounds and FMOD audio tables, but that does require some C++ coding, since we don’t expose programmer sounds to Blueprint.
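To make the programmer-sound approach concrete, here is a minimal sketch of the standard FMOD Studio callback pattern for playing a keyed sound from an audio table. It assumes a single event containing a programmer instrument and an audio table whose keys match your dialogue line IDs; the `DialogueContext` struct and the key names are hypothetical, and you would adapt this to however your game passes data around (note: the core-system accessor is `getCoreSystem()` in FMOD 2.x, `getLowLevelSystem()` in older 1.x versions that shipped alongside UE4).

```cpp
#include <fmod_studio.hpp>
#include <fmod.hpp>

// Hypothetical user-data struct: the callback needs both the Studio system
// (to look up the audio table) and the dialogue key for this playback.
struct DialogueContext
{
    FMOD::Studio::System* studioSystem;
    const char*           dialogueKey;   // e.g. an audio table key like "NPC_GREETING_01"
};

FMOD_RESULT F_CALLBACK DialogueEventCallback(
    FMOD_STUDIO_EVENT_CALLBACK_TYPE type,
    FMOD_STUDIO_EVENTINSTANCE*      event,
    void*                           parameters)
{
    auto* instance = reinterpret_cast<FMOD::Studio::EventInstance*>(event);

    if (type == FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND)
    {
        auto* props =
            static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);

        // Retrieve the context stashed on the instance before it was started.
        void* userData = nullptr;
        instance->getUserData(&userData);
        auto* context = static_cast<DialogueContext*>(userData);

        // Look up the keyed entry in the loaded audio table.
        FMOD_STUDIO_SOUND_INFO info;
        context->studioSystem->getSoundInfo(context->dialogueKey, &info);

        FMOD::System* coreSystem = nullptr;
        context->studioSystem->getCoreSystem(&coreSystem);

        // Create the sound non-blocking so loading doesn't stall the mixer.
        FMOD::Sound* sound = nullptr;
        coreSystem->createSound(info.name_or_data,
                                FMOD_LOOP_NORMAL | FMOD_CREATECOMPRESSEDSAMPLE |
                                    FMOD_NONBLOCKING | info.mode,
                                &info.exinfo, &sound);

        // Hand the sound to the programmer instrument.
        props->sound         = reinterpret_cast<FMOD_SOUND*>(sound);
        props->subsoundIndex = info.subsoundindex;
    }
    else if (type == FMOD_STUDIO_EVENT_CALLBACK_DESTROY_PROGRAMMER_SOUND)
    {
        // Release the sound once the instrument is done with it.
        auto* props =
            static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);
        reinterpret_cast<FMOD::Sound*>(props->sound)->release();
    }

    return FMOD_OK;
}
```

To use it you would call `instance->setUserData(&context)` and `instance->setCallback(DialogueEventCallback)` before starting the event instance; because the sound lives inside an FMOD event, it still routes through your dialogue bus for mixing and ducking.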
The issue is that in order to display subtitles, we need to use the Dialogue Wave and Dialogue Voice components in UE4, which are not compatible with FMOD. Unless there’s another way we’re not seeing?
Hi Geoff, just to follow up on this: would there be any way in the future to wrap FMOD programmer sounds in an Unreal voice component, in order to take advantage of the logic in the Unreal dialogue components for specifying gender, subtitles, speaker/listener, etc.? There’s not really an ideal solution right now. If we handle all of our dialogue in the UE4 audio engine, then we can’t mix, duck, or apply DSP effects to any of the dialogue within the FMOD mixer. If we handle the dialogue in FMOD, then we can’t make use of any of the UE4 dialogue components (subtitling/localization being the most crucial).
Answered over here: http://www.fmod.org/questions/question/localisationsubtitles-in-ue4/