[EDIT: for better readability, I split the bug part into another post]
I’d like to present a musical experiment that may become the music of our future game (maybe for the strategic map). Instead of the traditional adaptive music used in video games, based either on cross-fading between layers or on triggers to another region, I wanted to try a more probability-based approach: some elements, drawn from a pool of possible elements, are triggered (or not) by probabilities. These probabilities are modified by a single parameter, which makes it possible to shape the dynamics of the piece. Unlike the traditional cross-fading method, this approach can involve entire musical elements from start to finish, as if a real musician were jamming along.
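For those who prefer code to words, here is a minimal sketch of the probability-gating idea (plain illustrative C++, not FMOD Studio authoring; the element names and probability values are made up):

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Each musical element has a trigger probability at a quantization point,
// and that probability is shaped by a single "intensity" parameter.
struct Element {
    const char* name;
    float baseProb;  // probability when intensity == 0
    float fullProb;  // probability when intensity == 1
};

// Interpolate an element's trigger probability from the intensity parameter.
float triggerProbability(const Element& e, float intensity) {
    return e.baseProb + (e.fullProb - e.baseProb) * intensity;
}

int main() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<float> roll(0.0f, 1.0f);

    std::vector<Element> pool = {
        {"pad",        0.8f, 1.0f},
        {"arpeggio",   0.2f, 0.9f},
        {"percussion", 0.0f, 0.7f},
    };

    float intensity = 0.5f; // the single parameter shaping the dynamics

    // At each quantized start point (e.g. every 4 bars), roll per element.
    for (const Element& e : pool) {
        if (roll(rng) < triggerProbability(e, intensity)) {
            std::printf("trigger %s, played from start to finish\n", e.name);
        }
    }
}
```

Rolling once per quantized start point is what keeps whole elements intact, which is the point of the approach.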
In the second part (4:09), I show a parent event used as a “player” for the main event, setting the parameter value over time.
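For anyone who would rather drive the parameter from game code instead of a parent event, here is a sketch assuming the FMOD Studio C++ API (2.x); the bank file names, event path, and parameter name are placeholders, and you would need the FMOD SDK plus your own built banks to actually run it:

```cpp
#include <cmath>
#include "fmod_studio.hpp"

int main() {
    FMOD::Studio::System* studio = nullptr;
    FMOD::Studio::System::create(&studio);
    studio->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Placeholder bank names; error checking omitted for brevity.
    FMOD::Studio::Bank *master = nullptr, *strings = nullptr;
    studio->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &master);
    studio->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &strings);

    // Placeholder event path.
    FMOD::Studio::EventDescription* desc = nullptr;
    studio->getEvent("event:/Music/StrategicMap", &desc);

    FMOD::Studio::EventInstance* music = nullptr;
    desc->createInstance(&music);
    music->start();

    // Set the parameter over time from the game loop, playing the role of
    // the "player" parent event. "Intensity" is a placeholder name.
    for (float t = 0.0f; t < 300.0f; t += 0.05f) {
        float intensity = 0.5f + 0.5f * std::sin(t * 0.02f); // any curve of t
        music->setParameterByName("Intensity", intensity);
        studio->update();
        // in a real game loop, you would wait ~50 ms here
    }

    studio->release();
    return 0;
}
```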
Please feel free to comment and tell me if I missed a more efficient way of doing some of these things!
After creating my main event, I first tried to modulate the parameter’s value with a very slow LFO (although I eventually ended up using a “player” parent event instead). It’s possible to set an LFO rate slower than 0.01 Hz, but the field then displays 0.00 Hz and the miniature view is difficult to interpret (its scale doesn’t follow). It works, though.
SUGGESTION: it would be great to be able to create a custom-curve LFO modulator
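In the meantime, one possible workaround is to evaluate a custom looping curve in game code every update and feed the result to the parameter (e.g. via setParameterByName, as in the sketch above). A small self-contained example, with made-up keyframes:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// A "custom-curve LFO" done in game code: a looping piecewise-linear
// envelope evaluated each frame; its output would be passed to
// EventInstance::setParameterByName(). Keyframe values are illustrative.
float evalLoopingCurve(const std::vector<std::pair<float, float>>& keys, // (time s, value)
                       float period, float t) {
    float phase = std::fmod(t, period);
    for (size_t i = 1; i < keys.size(); ++i) {
        if (phase <= keys[i].first) {
            float span = keys[i].first - keys[i - 1].first;
            float a = span > 0.0f ? (phase - keys[i - 1].first) / span : 0.0f;
            return keys[i - 1].second + (keys[i].second - keys[i - 1].second) * a;
        }
    }
    return keys.back().second;
}

int main() {
    // A 200-second cycle: slow rise, plateau, then fall back down.
    std::vector<std::pair<float, float>> curve = {
        {0.0f, 0.0f}, {80.0f, 1.0f}, {140.0f, 1.0f}, {200.0f, 0.0f}};
    for (float t = 0.0f; t <= 400.0f; t += 50.0f)
        std::printf("t=%5.1f  value=%.2f\n", t, evalLoopingCurve(curve, 200.0f, t));
}
```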
I also have a question. For some tracks, I needed a nested looping event set at the same tempo as the parent event. The main event and the nested event may play in parallel for a very long time, with no synchronization other than the quantized start. Am I guaranteed that, since their tempos are the same, they will never drift apart over time? From my tests it seems to work fine, but I’d like to be sure. I show one of those nested events at 1:50 in my video.